00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 604 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3266 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.091 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.109 Fetching changes from the remote Git repository 00:00:00.111 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.134 Using shallow fetch with depth 1 00:00:00.134 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.134 > git --version # timeout=10 00:00:00.158 > git --version # 'git version 2.39.2' 00:00:00.158 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.180 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.180 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.038 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.049 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.060 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.060 > git config core.sparsecheckout # timeout=10 00:00:04.069 > git read-tree -mu HEAD # timeout=10 00:00:04.083 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.100 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.100 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.178 [Pipeline] Start of Pipeline 00:00:04.193 [Pipeline] library 00:00:04.195 Loading library shm_lib@master 00:00:04.195 Library shm_lib@master is cached. Copying from home. 00:00:04.215 [Pipeline] node 00:00:04.223 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.225 [Pipeline] { 00:00:04.235 [Pipeline] catchError 00:00:04.237 [Pipeline] { 00:00:04.251 [Pipeline] wrap 00:00:04.262 [Pipeline] { 00:00:04.271 [Pipeline] stage 00:00:04.273 [Pipeline] { (Prologue) 00:00:04.488 [Pipeline] sh 00:00:04.771 + logger -p user.info -t JENKINS-CI 00:00:04.791 [Pipeline] echo 00:00:04.792 Node: GP11 00:00:04.800 [Pipeline] sh 00:00:05.095 [Pipeline] setCustomBuildProperty 00:00:05.105 [Pipeline] echo 00:00:05.107 Cleanup processes 00:00:05.111 [Pipeline] sh 00:00:05.392 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.392 1345882 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.403 [Pipeline] sh 00:00:05.685 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.685 ++ grep -v 'sudo pgrep' 00:00:05.685 ++ awk '{print $1}' 00:00:05.685 + sudo kill -9 00:00:05.685 + true 00:00:05.698 [Pipeline] cleanWs 00:00:05.706 [WS-CLEANUP] Deleting project workspace... 00:00:05.706 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.711 [WS-CLEANUP] done 00:00:05.713 [Pipeline] setCustomBuildProperty 00:00:05.723 [Pipeline] sh 00:00:06.000 + sudo git config --global --replace-all safe.directory '*' 00:00:06.091 [Pipeline] httpRequest 00:00:06.127 [Pipeline] echo 00:00:06.128 Sorcerer 10.211.164.101 is alive 00:00:06.134 [Pipeline] httpRequest 00:00:06.137 HttpMethod: GET 00:00:06.138 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.138 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.158 Response Code: HTTP/1.1 200 OK 00:00:06.158 Success: Status code 200 is in the accepted range: 200,404 00:00:06.159 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:27.734 [Pipeline] sh 00:00:28.013 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:28.028 [Pipeline] httpRequest 00:00:28.044 [Pipeline] echo 00:00:28.046 Sorcerer 10.211.164.101 is alive 00:00:28.053 [Pipeline] httpRequest 00:00:28.057 HttpMethod: GET 00:00:28.058 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:28.058 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:28.071 Response Code: HTTP/1.1 200 OK 00:00:28.072 Success: Status code 200 is in the accepted range: 200,404 00:00:28.072 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:04.679 [Pipeline] sh 00:01:04.969 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:07.519 [Pipeline] sh 00:01:07.811 + git -C spdk log --oneline -n5 00:01:07.811 719d03c6a sock/uring: only register net impl if supported 00:01:07.811 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:07.811 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:07.811 6c7c1f57e accel: add sequence outstanding stat 00:01:07.811 3bc8e6a26 accel: add utility to put task 00:01:07.829 [Pipeline] withCredentials 00:01:07.840 > git --version # timeout=10 00:01:07.851 > git --version # 'git version 2.39.2' 00:01:07.868 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:07.870 [Pipeline] { 00:01:07.878 [Pipeline] retry 00:01:07.880 [Pipeline] { 00:01:07.896 [Pipeline] sh 00:01:08.179 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:09.578 [Pipeline] } 00:01:09.600 [Pipeline] // retry 00:01:09.604 [Pipeline] } 00:01:09.624 [Pipeline] // withCredentials 00:01:09.633 [Pipeline] httpRequest 00:01:09.651 [Pipeline] echo 00:01:09.653 Sorcerer 10.211.164.101 is alive 00:01:09.662 [Pipeline] httpRequest 00:01:09.667 HttpMethod: GET 00:01:09.667 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:09.668 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:09.674 Response Code: HTTP/1.1 200 OK 00:01:09.675 Success: Status code 200 is in the accepted range: 200,404 00:01:09.675 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:22.275 [Pipeline] sh 00:01:22.560 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:24.473 [Pipeline] sh 00:01:24.760 + git -C dpdk log --oneline -n5 00:01:24.760 eeb0605f11 version: 23.11.0 00:01:24.760 238778122a doc: 
update release notes for 23.11 00:01:24.760 46aa6b3cfc doc: fix description of RSS features 00:01:24.760 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:24.760 7e421ae345 devtools: support skipping forbid rule check 00:01:24.772 [Pipeline] } 00:01:24.790 [Pipeline] // stage 00:01:24.800 [Pipeline] stage 00:01:24.802 [Pipeline] { (Prepare) 00:01:24.826 [Pipeline] writeFile 00:01:24.843 [Pipeline] sh 00:01:25.138 + logger -p user.info -t JENKINS-CI 00:01:25.151 [Pipeline] sh 00:01:25.438 + logger -p user.info -t JENKINS-CI 00:01:25.451 [Pipeline] sh 00:01:25.735 + cat autorun-spdk.conf 00:01:25.735 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.735 SPDK_TEST_NVMF=1 00:01:25.735 SPDK_TEST_NVME_CLI=1 00:01:25.735 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.735 SPDK_TEST_NVMF_NICS=e810 00:01:25.735 SPDK_TEST_VFIOUSER=1 00:01:25.735 SPDK_RUN_UBSAN=1 00:01:25.735 NET_TYPE=phy 00:01:25.735 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:25.735 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.743 RUN_NIGHTLY=1 00:01:25.748 [Pipeline] readFile 00:01:25.779 [Pipeline] withEnv 00:01:25.782 [Pipeline] { 00:01:25.799 [Pipeline] sh 00:01:26.086 + set -ex 00:01:26.086 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:26.086 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:26.086 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.086 ++ SPDK_TEST_NVMF=1 00:01:26.086 ++ SPDK_TEST_NVME_CLI=1 00:01:26.086 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.086 ++ SPDK_TEST_NVMF_NICS=e810 00:01:26.086 ++ SPDK_TEST_VFIOUSER=1 00:01:26.086 ++ SPDK_RUN_UBSAN=1 00:01:26.086 ++ NET_TYPE=phy 00:01:26.086 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:26.086 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:26.086 ++ RUN_NIGHTLY=1 00:01:26.086 + case $SPDK_TEST_NVMF_NICS in 00:01:26.086 + DRIVERS=ice 00:01:26.086 + [[ tcp == \r\d\m\a ]] 00:01:26.086 + [[ -n ice ]] 00:01:26.086 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:26.086 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:26.086 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:26.086 rmmod: ERROR: Module irdma is not currently loaded 00:01:26.086 rmmod: ERROR: Module i40iw is not currently loaded 00:01:26.086 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:26.086 + true 00:01:26.086 + for D in $DRIVERS 00:01:26.086 + sudo modprobe ice 00:01:26.086 + exit 0 00:01:26.097 [Pipeline] } 00:01:26.118 [Pipeline] // withEnv 00:01:26.123 [Pipeline] } 00:01:26.141 [Pipeline] // stage 00:01:26.153 [Pipeline] catchError 00:01:26.156 [Pipeline] { 00:01:26.173 [Pipeline] timeout 00:01:26.174 Timeout set to expire in 50 min 00:01:26.176 [Pipeline] { 00:01:26.192 [Pipeline] stage 00:01:26.194 [Pipeline] { (Tests) 00:01:26.210 [Pipeline] sh 00:01:26.508 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.508 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.508 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.508 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:26.508 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:26.508 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:26.508 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:26.508 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:26.508 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:26.508 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:26.508 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:26.508 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.508 + source /etc/os-release 00:01:26.508 ++ NAME='Fedora Linux' 00:01:26.508 ++ VERSION='38 (Cloud Edition)' 00:01:26.508 ++ ID=fedora 00:01:26.508 ++ VERSION_ID=38 00:01:26.508 ++ VERSION_CODENAME= 00:01:26.508 ++ PLATFORM_ID=platform:f38 00:01:26.508 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:26.508 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:26.508 ++ LOGO=fedora-logo-icon 00:01:26.508 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:26.508 ++ HOME_URL=https://fedoraproject.org/ 00:01:26.508 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:26.508 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:26.508 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:26.508 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:26.508 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:26.508 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:26.508 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:26.508 ++ SUPPORT_END=2024-05-14 00:01:26.508 ++ VARIANT='Cloud Edition' 00:01:26.508 ++ VARIANT_ID=cloud 00:01:26.508 + uname -a 00:01:26.508 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:26.508 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:27.455 Hugepages 00:01:27.455 node hugesize free / total 00:01:27.455 node0 1048576kB 0 / 0 00:01:27.455 node0 2048kB 0 / 0 00:01:27.455 node1 1048576kB 0 / 0 00:01:27.455 node1 2048kB 0 / 0 00:01:27.455 00:01:27.455 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:27.455 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:27.455 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:27.455 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:27.455 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:27.455 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:27.455 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:27.455 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:27.455 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:27.455 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:27.455 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:27.455 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:27.455 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:27.455 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:27.455 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:27.455 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:27.455 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:27.455 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:27.455 + rm -f /tmp/spdk-ld-path 00:01:27.455 + source autorun-spdk.conf 00:01:27.455 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.455 ++ SPDK_TEST_NVMF=1 00:01:27.455 ++ SPDK_TEST_NVME_CLI=1 00:01:27.455 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.455 ++ SPDK_TEST_NVMF_NICS=e810 00:01:27.455 ++ SPDK_TEST_VFIOUSER=1 00:01:27.455 ++ SPDK_RUN_UBSAN=1 00:01:27.455 ++ NET_TYPE=phy 00:01:27.455 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:27.455 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.455 ++ RUN_NIGHTLY=1 00:01:27.455 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:27.455 + [[ -n '' ]] 00:01:27.455 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.455 + for M in /var/spdk/build-*-manifest.txt 00:01:27.455 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:27.455 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:27.455 + for M in /var/spdk/build-*-manifest.txt 00:01:27.455 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:27.455 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:27.455 ++ uname 00:01:27.455 + [[ Linux == \L\i\n\u\x ]] 00:01:27.455 + sudo dmesg -T 00:01:27.455 + sudo dmesg --clear 00:01:27.455 + dmesg_pid=1347222 00:01:27.455 + [[ Fedora Linux == FreeBSD ]] 00:01:27.455 + sudo dmesg -Tw 00:01:27.455 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.455 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.455 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:27.455 + [[ -x /usr/src/fio-static/fio ]] 00:01:27.455 + export FIO_BIN=/usr/src/fio-static/fio 00:01:27.455 + FIO_BIN=/usr/src/fio-static/fio 00:01:27.455 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:27.455 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:27.455 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:27.455 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.455 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.455 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:27.455 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.455 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.455 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:27.455 Test configuration: 00:01:27.455 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.455 SPDK_TEST_NVMF=1 00:01:27.455 SPDK_TEST_NVME_CLI=1 00:01:27.455 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.455 SPDK_TEST_NVMF_NICS=e810 00:01:27.455 SPDK_TEST_VFIOUSER=1 00:01:27.455 SPDK_RUN_UBSAN=1 00:01:27.455 NET_TYPE=phy 00:01:27.455 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:27.455 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.715 RUN_NIGHTLY=1 01:48:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:27.715 01:48:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:27.715 01:48:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:27.715 01:48:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:27.715 01:48:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.715 01:48:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.715 01:48:33 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.715 01:48:33 -- paths/export.sh@5 -- $ export PATH 00:01:27.715 01:48:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.715 01:48:33 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:27.715 01:48:33 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:27.715 01:48:33 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720914513.XXXXXX 00:01:27.715 01:48:33 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720914513.zo6DxJ 00:01:27.715 01:48:33 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:27.715 01:48:33 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:01:27.715 01:48:33 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.715 01:48:33 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:27.715 01:48:33 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:27.715 01:48:33 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:27.715 01:48:33 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:27.715 01:48:33 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:27.715 01:48:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.715 01:48:33 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:27.715 01:48:33 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:27.715 01:48:33 -- pm/common@17 -- $ local monitor 00:01:27.715 01:48:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.715 01:48:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.715 01:48:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.715 01:48:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.715 01:48:33 -- pm/common@21 -- $ date +%s 00:01:27.715 01:48:33 -- pm/common@25 -- $ sleep 1 00:01:27.715 01:48:33 -- pm/common@21 -- $ date +%s 00:01:27.715 01:48:33 -- pm/common@21 -- $ date +%s 00:01:27.715 01:48:33 -- pm/common@21 -- $ date +%s 00:01:27.715 01:48:33 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720914513 00:01:27.715 01:48:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720914513 00:01:27.715 01:48:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720914513 00:01:27.715 01:48:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720914513 00:01:27.715 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720914513_collect-vmstat.pm.log 00:01:27.715 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720914513_collect-cpu-load.pm.log 00:01:27.715 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720914513_collect-cpu-temp.pm.log 00:01:27.715 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720914513_collect-bmc-pm.bmc.pm.log 00:01:28.658 01:48:34 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:28.658 01:48:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:28.658 01:48:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:28.658 01:48:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.658 01:48:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:28.658 Sat Jul 13 11:48:34 PM UTC 2024 00:01:28.658 01:48:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:28.658 v24.09-pre-202-g719d03c6a 00:01:28.658 01:48:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:28.658 01:48:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:28.658 01:48:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:28.658 01:48:34 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:28.658 01:48:34 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:28.658 01:48:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.658 ************************************ 00:01:28.658 START TEST ubsan 00:01:28.658 ************************************ 00:01:28.658 01:48:34 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:28.658 using ubsan 00:01:28.658 00:01:28.658 real 0m0.000s 00:01:28.658 user 0m0.000s 00:01:28.658 sys 0m0.000s 00:01:28.658 01:48:34 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:28.658 01:48:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:28.658 ************************************ 00:01:28.658 END TEST ubsan 00:01:28.658 ************************************ 00:01:28.658 01:48:34 -- common/autotest_common.sh@1142 -- $ return 0 00:01:28.658 01:48:34 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:28.658 01:48:34 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:28.658 01:48:34 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:28.658 01:48:34 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:28.658 01:48:34 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:28.658 01:48:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.658 ************************************ 00:01:28.658 START TEST build_native_dpdk 00:01:28.658 ************************************ 00:01:28.658 01:48:34 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:28.658 eeb0605f11 version: 23.11.0 00:01:28.658 238778122a doc: update release notes for 23.11 00:01:28.658 46aa6b3cfc doc: fix description of RSS features 00:01:28.658 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:28.658 7e421ae345 devtools: support skipping forbid rule check 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:28.658 01:48:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:28.659 01:48:34 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:28.659 01:48:34 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:28.659 patching file config/rte_config.h 00:01:28.659 Hunk #1 succeeded at 60 (offset 1 line). 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:28.659 01:48:34 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:32.861 The Meson build system 00:01:32.861 Version: 1.3.1 00:01:32.861 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:32.861 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:32.861 Build type: native build 00:01:32.861 Program cat found: YES (/usr/bin/cat) 00:01:32.861 Project name: DPDK 00:01:32.861 Project version: 23.11.0 00:01:32.861 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:32.861 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:32.861 Host machine cpu family: x86_64 00:01:32.861 Host machine cpu: x86_64 00:01:32.861 Message: ## Building in Developer Mode ## 00:01:32.861 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:32.861 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:32.861 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:32.861 Program python3 found: YES (/usr/bin/python3) 00:01:32.861 Program cat found: YES (/usr/bin/cat) 00:01:32.861 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:32.861 Compiler for C supports arguments -march=native: YES 00:01:32.861 Checking for size of "void *" : 8 00:01:32.861 Checking for size of "void *" : 8 (cached) 00:01:32.861 Library m found: YES 00:01:32.861 Library numa found: YES 00:01:32.861 Has header "numaif.h" : YES 00:01:32.861 Library fdt found: NO 00:01:32.861 Library execinfo found: NO 00:01:32.861 Has header "execinfo.h" : YES 00:01:32.861 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:32.862 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:32.862 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:32.862 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:32.862 Run-time dependency openssl found: YES 3.0.9 00:01:32.862 Run-time dependency libpcap found: YES 1.10.4 00:01:32.862 Has header "pcap.h" with dependency libpcap: YES 00:01:32.862 Compiler for C supports arguments -Wcast-qual: YES 00:01:32.862 Compiler for C supports arguments -Wdeprecated: YES 00:01:32.862 Compiler for C supports arguments -Wformat: YES 00:01:32.862 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:32.862 Compiler for C supports arguments -Wformat-security: NO 00:01:32.862 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:32.862 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:32.862 Compiler for C supports arguments -Wnested-externs: YES 00:01:32.862 Compiler for C supports arguments -Wold-style-definition: YES 00:01:32.862 Compiler for C supports arguments -Wpointer-arith: YES 00:01:32.862 Compiler for C supports arguments -Wsign-compare: YES 00:01:32.862 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:32.862 Compiler for C supports arguments -Wundef: YES 00:01:32.862 Compiler for C supports arguments -Wwrite-strings: YES 00:01:32.862 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:32.862 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:32.862 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:32.862 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:32.862 Program objdump found: YES (/usr/bin/objdump) 00:01:32.862 Compiler for C supports arguments -mavx512f: YES 00:01:32.862 Checking if "AVX512 checking" compiles: YES 00:01:32.862 Fetching value of define "__SSE4_2__" : 1 00:01:32.862 Fetching value of define "__AES__" : 1 00:01:32.862 Fetching value of define "__AVX__" : 1 00:01:32.862 Fetching value of define "__AVX2__" : (undefined) 00:01:32.862 Fetching value of define "__AVX512BW__" : (undefined) 00:01:32.862 Fetching value of define "__AVX512CD__" : (undefined) 00:01:32.862 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:32.862 Fetching value of define "__AVX512F__" : (undefined) 00:01:32.862 Fetching value of define "__AVX512VL__" : (undefined) 00:01:32.862 Fetching value of define "__PCLMUL__" : 1 00:01:32.862 Fetching value of define "__RDRND__" : 1 00:01:32.862 Fetching value of define "__RDSEED__" : (undefined) 00:01:32.862 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:32.862 Fetching value of define "__znver1__" : (undefined) 00:01:32.862 Fetching value of define "__znver2__" : (undefined) 00:01:32.862 Fetching value of define "__znver3__" : (undefined) 00:01:32.862 Fetching value of define "__znver4__" : (undefined) 00:01:32.862 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:32.862 Message: lib/log: Defining dependency "log" 00:01:32.862 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:32.862 Message: lib/telemetry: Defining dependency "telemetry" 00:01:32.862 Checking for function "getentropy" : NO 00:01:32.862 Message: lib/eal: Defining dependency "eal" 00:01:32.862 Message: lib/ring: Defining dependency "ring" 00:01:32.862 Message: lib/rcu: Defining dependency "rcu" 00:01:32.862 Message: lib/mempool: Defining dependency "mempool" 00:01:32.862 Message: lib/mbuf: Defining dependency "mbuf" 00:01:32.862 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:32.862 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.862 Compiler for C supports arguments -mpclmul: YES 00:01:32.862 Compiler for C supports arguments -maes: YES 00:01:32.862 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:32.862 Compiler for C supports arguments -mavx512bw: YES 00:01:32.862 Compiler for C supports arguments -mavx512dq: YES 00:01:32.862 Compiler for C supports arguments -mavx512vl: YES 00:01:32.862 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:32.862 Compiler for C supports arguments -mavx2: YES 00:01:32.862 Compiler for C supports arguments -mavx: YES 00:01:32.862 Message: lib/net: Defining dependency "net" 00:01:32.862 Message: lib/meter: Defining dependency "meter" 00:01:32.862 Message: lib/ethdev: Defining dependency "ethdev" 00:01:32.862 Message: lib/pci: Defining dependency "pci" 00:01:32.862 Message: lib/cmdline: Defining dependency "cmdline" 00:01:32.862 Message: lib/metrics: Defining dependency "metrics" 00:01:32.862 Message: lib/hash: Defining dependency "hash" 00:01:32.862 Message: lib/timer: Defining dependency "timer" 00:01:32.862 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.862 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:32.862 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:32.862 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:32.862 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:32.862 Message: lib/acl: Defining dependency "acl" 00:01:32.862 Message: lib/bbdev: Defining dependency "bbdev" 00:01:32.862 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:32.862 Run-time dependency libelf found: YES 0.190 00:01:32.862 Message: lib/bpf: Defining dependency "bpf" 00:01:32.862 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:32.862 Message: lib/compressdev: Defining dependency "compressdev" 00:01:32.862 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:32.862 Message: lib/distributor: Defining dependency "distributor" 00:01:32.862 Message: lib/dmadev: Defining dependency "dmadev" 00:01:32.862 Message: lib/efd: Defining dependency "efd" 00:01:32.862 Message: lib/eventdev: Defining dependency "eventdev" 00:01:32.862 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:32.862 Message: lib/gpudev: Defining dependency "gpudev" 00:01:32.862 Message: lib/gro: Defining dependency "gro" 00:01:32.862 Message: lib/gso: Defining dependency "gso" 00:01:32.862 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:32.862 Message: lib/jobstats: Defining dependency "jobstats" 00:01:32.862 Message: lib/latencystats: Defining dependency "latencystats" 00:01:32.862 Message: lib/lpm: Defining dependency "lpm" 00:01:32.862 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.862 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:32.862 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:32.862 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:32.862 Message: lib/member: Defining dependency "member" 00:01:32.862 Message: lib/pcapng: Defining dependency "pcapng" 00:01:32.862 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:32.862 Message: lib/power: Defining dependency "power" 00:01:32.862 Message: lib/rawdev: Defining dependency "rawdev" 00:01:32.862 Message: lib/regexdev: Defining dependency "regexdev" 00:01:32.862 Message: lib/mldev: Defining dependency "mldev" 00:01:32.862 Message: lib/rib: Defining dependency "rib" 00:01:32.862 Message: lib/reorder: Defining dependency "reorder" 00:01:32.862 Message: lib/sched: Defining dependency "sched" 00:01:32.862 Message: lib/security: Defining dependency "security" 00:01:32.862 Message: lib/stack: Defining dependency "stack" 00:01:32.862 Has header "linux/userfaultfd.h" : YES 00:01:32.862 Has header "linux/vduse.h" : YES 00:01:32.862 Message: lib/vhost: Defining dependency "vhost" 00:01:32.862 Message: lib/ipsec: Defining dependency "ipsec" 00:01:32.862 Message: lib/pdcp: Defining dependency "pdcp" 00:01:32.862 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.862 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:32.862 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:32.862 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:32.862 Message: lib/fib: Defining dependency "fib" 00:01:32.862 Message: lib/port: Defining dependency "port" 00:01:32.862 Message: lib/pdump: Defining dependency "pdump" 00:01:32.862 Message: lib/table: Defining dependency "table" 00:01:32.862 Message: lib/pipeline: Defining dependency "pipeline" 00:01:32.862 Message: lib/graph: Defining dependency "graph" 00:01:32.862 Message: lib/node: Defining dependency "node" 00:01:34.247 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:34.247 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:34.247 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:34.247 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:34.247 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:34.247 Compiler for C supports arguments -Wno-unused-value: YES 00:01:34.247 Compiler for C supports arguments -Wno-format: YES 00:01:34.247 Compiler for C supports arguments -Wno-format-security: YES 00:01:34.247 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:34.247 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:34.247 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:34.247 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:34.247 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:34.247 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:34.247 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:34.247 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:34.247 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:34.247 Has header "sys/epoll.h" : YES 00:01:34.247 Program doxygen found: YES (/usr/bin/doxygen) 00:01:34.247 Configuring doxy-api-html.conf using configuration 00:01:34.247 Configuring doxy-api-man.conf using configuration 00:01:34.247 Program mandb found: YES (/usr/bin/mandb) 00:01:34.247 Program sphinx-build found: NO 00:01:34.247 Configuring rte_build_config.h using configuration 00:01:34.247 Message: 00:01:34.247 ================= 00:01:34.247 Applications Enabled 00:01:34.247 
================= 00:01:34.247 00:01:34.247 apps: 00:01:34.247 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:34.247 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:34.247 test-pmd, test-regex, test-sad, test-security-perf, 00:01:34.247 00:01:34.247 Message: 00:01:34.247 ================= 00:01:34.247 Libraries Enabled 00:01:34.247 ================= 00:01:34.247 00:01:34.247 libs: 00:01:34.247 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:34.247 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:34.247 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:34.247 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:34.247 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:34.247 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:34.247 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:34.247 00:01:34.247 00:01:34.247 Message: 00:01:34.247 =============== 00:01:34.247 Drivers Enabled 00:01:34.247 =============== 00:01:34.247 00:01:34.247 common: 00:01:34.247 00:01:34.247 bus: 00:01:34.247 pci, vdev, 00:01:34.247 mempool: 00:01:34.247 ring, 00:01:34.247 dma: 00:01:34.247 00:01:34.247 net: 00:01:34.247 i40e, 00:01:34.247 raw: 00:01:34.247 00:01:34.247 crypto: 00:01:34.247 00:01:34.247 compress: 00:01:34.247 00:01:34.247 regex: 00:01:34.247 00:01:34.247 ml: 00:01:34.247 00:01:34.247 vdpa: 00:01:34.247 00:01:34.247 event: 00:01:34.247 00:01:34.247 baseband: 00:01:34.247 00:01:34.247 gpu: 00:01:34.247 00:01:34.247 00:01:34.247 Message: 00:01:34.247 ================= 00:01:34.247 Content Skipped 00:01:34.247 ================= 00:01:34.247 00:01:34.247 apps: 00:01:34.247 00:01:34.247 libs: 00:01:34.247 00:01:34.247 drivers: 00:01:34.247 common/cpt: not in enabled drivers build config 00:01:34.247 common/dpaax: not in enabled drivers build config 00:01:34.247 common/iavf: not in enabled drivers build config 00:01:34.247 common/idpf: not in enabled drivers build config 00:01:34.247 common/mvep: not in enabled drivers build config 00:01:34.247 common/octeontx: not in enabled drivers build config 00:01:34.247 bus/auxiliary: not in enabled drivers build config 00:01:34.247 bus/cdx: not in enabled drivers build config 00:01:34.247 bus/dpaa: not in enabled drivers build config 00:01:34.247 bus/fslmc: not in enabled drivers build config 00:01:34.247 bus/ifpga: not in enabled drivers build config 00:01:34.247 bus/platform: not in enabled drivers build config 00:01:34.247 bus/vmbus: not in enabled drivers build config 00:01:34.247 common/cnxk: not in enabled drivers build config 00:01:34.247 common/mlx5: not in enabled drivers build config 00:01:34.247 common/nfp: not in enabled drivers build config 00:01:34.247 common/qat: not in enabled drivers build config 00:01:34.247 common/sfc_efx: not in enabled drivers build config 00:01:34.247 mempool/bucket: not in enabled drivers build config 00:01:34.247 mempool/cnxk: not in enabled drivers build config 00:01:34.247 mempool/dpaa: not in enabled drivers build config 00:01:34.247 mempool/dpaa2: not in enabled drivers build config 00:01:34.247 mempool/octeontx: not in enabled drivers build config 00:01:34.247 mempool/stack: not in enabled drivers build config 00:01:34.247 dma/cnxk: not in enabled drivers build config 00:01:34.247 dma/dpaa: not in enabled drivers build config 00:01:34.247 dma/dpaa2: not in enabled drivers build 
config 00:01:34.247 dma/hisilicon: not in enabled drivers build config 00:01:34.247 dma/idxd: not in enabled drivers build config 00:01:34.247 dma/ioat: not in enabled drivers build config 00:01:34.247 dma/skeleton: not in enabled drivers build config 00:01:34.247 net/af_packet: not in enabled drivers build config 00:01:34.247 net/af_xdp: not in enabled drivers build config 00:01:34.247 net/ark: not in enabled drivers build config 00:01:34.247 net/atlantic: not in enabled drivers build config 00:01:34.247 net/avp: not in enabled drivers build config 00:01:34.247 net/axgbe: not in enabled drivers build config 00:01:34.247 net/bnx2x: not in enabled drivers build config 00:01:34.247 net/bnxt: not in enabled drivers build config 00:01:34.247 net/bonding: not in enabled drivers build config 00:01:34.247 net/cnxk: not in enabled drivers build config 00:01:34.247 net/cpfl: not in enabled drivers build config 00:01:34.247 net/cxgbe: not in enabled drivers build config 00:01:34.247 net/dpaa: not in enabled drivers build config 00:01:34.247 net/dpaa2: not in enabled drivers build config 00:01:34.247 net/e1000: not in enabled drivers build config 00:01:34.247 net/ena: not in enabled drivers build config 00:01:34.247 net/enetc: not in enabled drivers build config 00:01:34.247 net/enetfec: not in enabled drivers build config 00:01:34.247 net/enic: not in enabled drivers build config 00:01:34.247 net/failsafe: not in enabled drivers build config 00:01:34.247 net/fm10k: not in enabled drivers build config 00:01:34.247 net/gve: not in enabled drivers build config 00:01:34.247 net/hinic: not in enabled drivers build config 00:01:34.247 net/hns3: not in enabled drivers build config 00:01:34.247 net/iavf: not in enabled drivers build config 00:01:34.247 net/ice: not in enabled drivers build config 00:01:34.247 net/idpf: not in enabled drivers build config 00:01:34.247 net/igc: not in enabled drivers build config 00:01:34.247 net/ionic: not in enabled drivers build config 00:01:34.247 net/ipn3ke: not in enabled drivers build config 00:01:34.247 net/ixgbe: not in enabled drivers build config 00:01:34.247 net/mana: not in enabled drivers build config 00:01:34.247 net/memif: not in enabled drivers build config 00:01:34.247 net/mlx4: not in enabled drivers build config 00:01:34.247 net/mlx5: not in enabled drivers build config 00:01:34.247 net/mvneta: not in enabled drivers build config 00:01:34.247 net/mvpp2: not in enabled drivers build config 00:01:34.247 net/netvsc: not in enabled drivers build config 00:01:34.247 net/nfb: not in enabled drivers build config 00:01:34.247 net/nfp: not in enabled drivers build config 00:01:34.247 net/ngbe: not in enabled drivers build config 00:01:34.247 net/null: not in enabled drivers build config 00:01:34.247 net/octeontx: not in enabled drivers build config 00:01:34.247 net/octeon_ep: not in enabled drivers build config 00:01:34.247 net/pcap: not in enabled drivers build config 00:01:34.247 net/pfe: not in enabled drivers build config 00:01:34.247 net/qede: not in enabled drivers build config 00:01:34.247 net/ring: not in enabled drivers build config 00:01:34.247 net/sfc: not in enabled drivers build config 00:01:34.247 net/softnic: not in enabled drivers build config 00:01:34.247 net/tap: not in enabled drivers build config 00:01:34.247 net/thunderx: not in enabled drivers build config 00:01:34.247 net/txgbe: not in enabled drivers build config 00:01:34.247 net/vdev_netvsc: not in enabled drivers build config 00:01:34.247 net/vhost: not in enabled drivers build config 
00:01:34.247 net/virtio: not in enabled drivers build config 00:01:34.247 net/vmxnet3: not in enabled drivers build config 00:01:34.247 raw/cnxk_bphy: not in enabled drivers build config 00:01:34.247 raw/cnxk_gpio: not in enabled drivers build config 00:01:34.247 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:34.247 raw/ifpga: not in enabled drivers build config 00:01:34.247 raw/ntb: not in enabled drivers build config 00:01:34.247 raw/skeleton: not in enabled drivers build config 00:01:34.247 crypto/armv8: not in enabled drivers build config 00:01:34.247 crypto/bcmfs: not in enabled drivers build config 00:01:34.247 crypto/caam_jr: not in enabled drivers build config 00:01:34.247 crypto/ccp: not in enabled drivers build config 00:01:34.247 crypto/cnxk: not in enabled drivers build config 00:01:34.247 crypto/dpaa_sec: not in enabled drivers build config 00:01:34.247 crypto/dpaa2_sec: not in enabled drivers build config 00:01:34.247 crypto/ipsec_mb: not in enabled drivers build config 00:01:34.247 crypto/mlx5: not in enabled drivers build config 00:01:34.247 crypto/mvsam: not in enabled drivers build config 00:01:34.247 crypto/nitrox: not in enabled drivers build config 00:01:34.247 crypto/null: not in enabled drivers build config 00:01:34.247 crypto/octeontx: not in enabled drivers build config 00:01:34.247 crypto/openssl: not in enabled drivers build config 00:01:34.247 crypto/scheduler: not in enabled drivers build config 00:01:34.247 crypto/uadk: not in enabled drivers build config 00:01:34.247 crypto/virtio: not in enabled drivers build config 00:01:34.247 compress/isal: not in enabled drivers build config 00:01:34.247 compress/mlx5: not in enabled drivers build config 00:01:34.247 compress/octeontx: not in enabled drivers build config 00:01:34.248 compress/zlib: not in enabled drivers build config 00:01:34.248 regex/mlx5: not in enabled drivers build config 00:01:34.248 regex/cn9k: not in enabled drivers build config 00:01:34.248 ml/cnxk: not in enabled drivers build config 00:01:34.248 vdpa/ifc: not in enabled drivers build config 00:01:34.248 vdpa/mlx5: not in enabled drivers build config 00:01:34.248 vdpa/nfp: not in enabled drivers build config 00:01:34.248 vdpa/sfc: not in enabled drivers build config 00:01:34.248 event/cnxk: not in enabled drivers build config 00:01:34.248 event/dlb2: not in enabled drivers build config 00:01:34.248 event/dpaa: not in enabled drivers build config 00:01:34.248 event/dpaa2: not in enabled drivers build config 00:01:34.248 event/dsw: not in enabled drivers build config 00:01:34.248 event/opdl: not in enabled drivers build config 00:01:34.248 event/skeleton: not in enabled drivers build config 00:01:34.248 event/sw: not in enabled drivers build config 00:01:34.248 event/octeontx: not in enabled drivers build config 00:01:34.248 baseband/acc: not in enabled drivers build config 00:01:34.248 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:34.248 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:34.248 baseband/la12xx: not in enabled drivers build config 00:01:34.248 baseband/null: not in enabled drivers build config 00:01:34.248 baseband/turbo_sw: not in enabled drivers build config 00:01:34.248 gpu/cuda: not in enabled drivers build config 00:01:34.248 00:01:34.248 00:01:34.248 Build targets in project: 220 00:01:34.248 00:01:34.248 DPDK 23.11.0 00:01:34.248 00:01:34.248 User defined options 00:01:34.248 libdir : lib 00:01:34.248 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:34.248 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:34.248 c_link_args : 00:01:34.248 enable_docs : false 00:01:34.248 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:34.248 enable_kmods : false 00:01:34.248 machine : native 00:01:34.248 tests : false 00:01:34.248 00:01:34.248 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:34.248 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:34.248 01:48:39 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:34.248 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:34.248 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:34.248 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:34.248 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:34.248 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:34.248 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:34.248 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:34.248 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:34.248 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:34.248 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:34.248 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:34.248 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:34.248 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:34.248 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:34.511 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:34.511 [15/710] Linking static target lib/librte_kvargs.a 00:01:34.511 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:34.511 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:34.511 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:34.511 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:34.511 [20/710] Linking static target lib/librte_log.a 00:01:34.774 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:34.774 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.347 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:35.347 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:35.347 [25/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.347 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:35.347 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:35.347 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:35.347 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:35.347 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:35.347 [31/710] Linking target lib/librte_log.so.24.0 00:01:35.347 [32/710] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:35.347 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.347 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.347 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:35.347 [36/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.347 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:35.347 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:35.347 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:35.347 [40/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.347 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:35.347 [42/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.347 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:35.609 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:35.609 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.609 [46/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:35.609 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.609 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:35.609 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:35.609 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:35.609 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:35.609 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:35.609 [53/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:35.609 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.609 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.609 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:35.609 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:35.609 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.609 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.609 [60/710] Linking target lib/librte_kvargs.so.24.0 00:01:35.609 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.609 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.870 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.870 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.870 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:35.870 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:36.136 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:36.136 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:36.136 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:36.136 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:36.136 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:36.136 
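(For reference: the "User defined options" summary printed above corresponds, roughly, to a configure-and-build sequence like the sketch below. This is a reconstruction from the logged option values only, not the literal command in common/autobuild_common.sh; the deprecation warning in the log suggests the script actually uses the older `meson [options]` form rather than `meson setup`, and the working directory is assumed.)

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j48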
[72/710] Linking static target lib/librte_pci.a 00:01:36.136 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:36.136 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:36.395 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:36.395 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:36.395 [77/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:36.395 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:36.395 [79/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:36.395 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:36.395 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:36.395 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:36.395 [83/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.660 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:36.660 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:36.660 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:36.660 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:36.660 [88/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:36.660 [89/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:36.660 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:36.660 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:36.660 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:36.660 [93/710] Linking static target lib/librte_ring.a 00:01:36.660 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:36.660 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:36.660 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:36.660 [97/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:36.660 [98/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:36.660 [99/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:36.660 [100/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:36.660 [101/710] Linking static target lib/librte_meter.a 00:01:36.660 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.924 [103/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:36.924 [104/710] Linking static target lib/librte_telemetry.a 00:01:36.924 [105/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:36.924 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:36.924 [107/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:36.924 [108/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:36.924 [109/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:36.924 [110/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:36.924 [111/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:36.924 [112/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:37.186 [113/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:37.186 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:37.186 [115/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.186 [116/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:37.186 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:37.186 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:37.186 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:37.186 [120/710] Linking static target lib/librte_eal.a 00:01:37.186 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:37.186 [122/710] Linking static target lib/librte_net.a 00:01:37.186 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:37.451 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.451 [125/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:37.451 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:37.451 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:37.451 [128/710] Linking static target lib/librte_mempool.a 00:01:37.451 [129/710] Linking static target lib/librte_cmdline.a 00:01:37.451 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.451 [131/710] Linking target lib/librte_telemetry.so.24.0 00:01:37.710 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:37.710 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:37.710 [134/710] Linking static target lib/librte_cfgfile.a 00:01:37.710 [135/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.710 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:37.710 [137/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:37.710 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:37.710 [139/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:37.710 [140/710] Linking static target lib/librte_metrics.a 00:01:37.710 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:37.710 [142/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:37.710 [143/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:38.005 [144/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:38.005 [145/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:38.005 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:38.005 [147/710] Linking static target lib/librte_bitratestats.a 00:01:38.005 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:38.270 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:38.270 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:38.270 [151/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.270 [152/710] Linking static target lib/librte_rcu.a 00:01:38.270 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.270 [154/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 
00:01:38.270 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:38.270 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.270 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:38.270 [158/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.270 [159/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:38.533 [160/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.533 [161/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.533 [162/710] Linking static target lib/librte_timer.a 00:01:38.533 [163/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.533 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.533 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:38.533 [166/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.533 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:38.792 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:38.792 [169/710] Linking static target lib/librte_bbdev.a 00:01:38.792 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.792 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.792 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:38.792 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.055 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.055 [175/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:39.055 [176/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.055 [177/710] Linking static target lib/librte_compressdev.a 00:01:39.055 [178/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.056 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:39.056 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:39.319 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:39.319 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:39.319 [183/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:39.319 [184/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.319 [185/710] Linking static target lib/librte_distributor.a 00:01:39.584 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:39.584 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.584 [188/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.584 [189/710] Linking static target lib/librte_dmadev.a 00:01:39.584 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:39.584 [191/710] Linking static target lib/librte_bpf.a 00:01:39.845 [192/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.845 [193/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:39.845 [194/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:39.845 [195/710] Linking static target lib/librte_dispatcher.a 00:01:39.845 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:39.845 [197/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.845 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:39.845 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:39.845 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:40.110 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:40.110 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:40.110 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:40.110 [204/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.110 [205/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:40.110 [206/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:40.110 [207/710] Linking static target lib/librte_gpudev.a 00:01:40.110 [208/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:40.110 [209/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:40.110 [210/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.110 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:40.110 [212/710] Linking static target lib/librte_gro.a 00:01:40.110 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.110 [214/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.372 [215/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:40.372 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.372 [217/710] Linking static target lib/librte_jobstats.a 00:01:40.372 [218/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:40.372 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:40.635 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:40.635 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.635 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.635 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:40.635 [224/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.635 [225/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:40.901 [226/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:40.901 [227/710] Linking static target lib/librte_latencystats.a 00:01:40.901 [228/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:40.901 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:40.901 [230/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:40.901 [231/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:40.901 [232/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:40.901 [233/710] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:41.162 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:41.162 [235/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:41.162 [236/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:41.162 [237/710] Linking static target lib/librte_ip_frag.a 00:01:41.162 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.162 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:41.162 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:41.427 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:41.427 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:41.427 [243/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.427 [244/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:41.690 [245/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:41.690 [246/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:41.690 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:41.690 [248/710] Linking static target lib/librte_gso.a 00:01:41.690 [249/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.690 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:41.690 [251/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.690 [252/710] Linking static target lib/librte_regexdev.a 00:01:41.953 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:41.953 [254/710] Linking static target lib/librte_rawdev.a 00:01:41.953 [255/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:41.953 [256/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:41.953 [257/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:41.953 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:41.953 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.953 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:41.953 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:42.213 [262/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:42.213 [263/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:42.213 [264/710] Linking static target lib/librte_pcapng.a 00:01:42.213 [265/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:42.213 [266/710] Linking static target lib/librte_efd.a 00:01:42.213 [267/710] Linking static target lib/librte_mldev.a 00:01:42.213 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:42.213 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:42.213 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:42.213 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:01:42.213 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:42.475 [273/710] Linking static target lib/librte_stack.a 00:01:42.475 [274/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 
00:01:42.475 [275/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:42.475 [276/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:42.475 [277/710] Linking static target lib/librte_lpm.a 00:01:42.475 [278/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:42.475 [279/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:42.475 [280/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:42.475 [281/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.475 [282/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.475 [283/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:42.475 [284/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.740 [285/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.740 [286/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:42.740 [287/710] Linking static target lib/librte_hash.a 00:01:42.740 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.740 [289/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.740 [290/710] Linking static target lib/librte_reorder.a 00:01:42.740 [291/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.740 [292/710] Linking static target lib/librte_power.a 00:01:43.000 [293/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.000 [294/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:43.000 [295/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:43.000 [296/710] Linking static target lib/acl/libavx512_tmp.a 00:01:43.000 [297/710] Linking static target lib/librte_acl.a 00:01:43.000 [298/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.000 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:43.000 [300/710] Linking static target lib/librte_security.a 00:01:43.262 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:43.262 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:43.262 [303/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:43.262 [304/710] Linking static target lib/librte_mbuf.a 00:01:43.262 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:43.262 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:43.262 [307/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.262 [308/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:43.262 [309/710] Linking static target lib/librte_rib.a 00:01:43.262 [310/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:43.262 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:43.262 [312/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.525 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:43.525 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:43.525 [315/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:43.525 [316/710] Compiling C object 
lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:43.525 [317/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.525 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.788 [319/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:43.788 [320/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:43.788 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:43.788 [322/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:43.788 [323/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:43.788 [324/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:43.788 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:43.788 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.048 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:44.048 [328/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:44.048 [329/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.048 [330/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.048 [331/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.317 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:44.317 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:44.576 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:44.576 [335/710] Linking static target lib/librte_member.a 00:01:44.576 [336/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:44.576 [337/710] Linking static target lib/librte_eventdev.a 00:01:44.576 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:44.835 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:44.835 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:44.835 [341/710] Linking static target lib/librte_cryptodev.a 00:01:44.835 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:44.835 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:44.835 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:44.835 [345/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:44.835 [346/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:44.835 [347/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:44.835 [348/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:44.835 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:44.835 [350/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:45.096 [351/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:45.096 [352/710] Linking static target lib/librte_ethdev.a 00:01:45.096 [353/710] Linking static target lib/librte_sched.a 00:01:45.096 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:45.096 [355/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.096 [356/710] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:45.096 [357/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:45.096 [358/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:45.096 [359/710] Linking static target lib/librte_fib.a 00:01:45.096 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:45.361 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:45.361 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:45.361 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:45.361 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:45.361 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:45.361 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:45.622 [367/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:45.622 [368/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.622 [369/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:45.622 [370/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:45.622 [371/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.622 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:45.622 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:45.885 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:45.885 [375/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:45.885 [376/710] Linking static target lib/librte_pdump.a 00:01:46.149 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:46.149 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:46.149 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:46.149 [380/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.149 [381/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:46.149 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:46.149 [383/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:46.149 [384/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:46.416 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:46.416 [386/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:46.416 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.416 [388/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:46.416 [389/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.416 [390/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:46.416 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:46.675 [392/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:46.675 [393/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:46.675 [394/710] Linking static target lib/librte_ipsec.a 00:01:46.675 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:46.675 [396/710] Linking static target lib/librte_table.a 00:01:46.936 [397/710] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.936 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:46.936 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:46.936 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:47.198 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.198 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:47.462 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:47.462 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:47.462 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:47.462 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:47.462 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:47.462 [408/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:47.729 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.729 [410/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.729 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.729 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.729 [413/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.729 [414/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:47.992 [415/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:47.992 [416/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.992 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:47.992 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.992 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:47.992 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.251 [421/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.251 [422/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:48.251 [423/710] Linking static target drivers/librte_bus_vdev.a 00:01:48.251 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:48.251 [425/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.251 [426/710] Linking static target lib/librte_port.a 00:01:48.251 [427/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:48.251 [428/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.516 [429/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.516 [430/710] Linking target lib/librte_eal.so.24.0 00:01:48.516 [431/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.516 [432/710] Linking static target drivers/librte_bus_pci.a 00:01:48.516 [433/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.516 [434/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:48.516 [435/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:48.516 [436/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.516 [437/710] Linking static target lib/librte_graph.a 00:01:48.780 [438/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:48.780 [439/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:48.780 [440/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:48.780 [441/710] Linking target lib/librte_ring.so.24.0 00:01:48.780 [442/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:48.780 [443/710] Linking target lib/librte_meter.so.24.0 00:01:49.046 [444/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.046 [445/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:49.046 [446/710] Linking target lib/librte_pci.so.24.0 00:01:49.046 [447/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:49.046 [448/710] Linking target lib/librte_rcu.so.24.0 00:01:49.046 [449/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.046 [450/710] Linking target lib/librte_mempool.so.24.0 00:01:49.046 [451/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:49.046 [452/710] Linking target lib/librte_timer.so.24.0 00:01:49.046 [453/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:49.304 [454/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:49.304 [455/710] Linking target lib/librte_acl.so.24.0 00:01:49.304 [456/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:49.304 [457/710] Linking target lib/librte_cfgfile.so.24.0 00:01:49.304 [458/710] Linking target lib/librte_dmadev.so.24.0 00:01:49.305 [459/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:49.305 [460/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.305 [461/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:49.305 [462/710] Linking target lib/librte_jobstats.so.24.0 00:01:49.305 [463/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.305 [464/710] Linking target lib/librte_rawdev.so.24.0 00:01:49.305 [465/710] Linking target lib/librte_stack.so.24.0 00:01:49.305 [466/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:49.305 [467/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:49.305 [468/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:49.305 [469/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:49.305 [470/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:49.571 [471/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:49.571 [472/710] Linking target lib/librte_mbuf.so.24.0 00:01:49.571 [473/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:49.571 [474/710] Linking target lib/librte_rib.so.24.0 00:01:49.571 [475/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:49.571 [476/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:49.571 [477/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:49.571 [478/710] 
Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:49.571 [479/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:49.571 [480/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:49.571 [481/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:49.572 [482/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:49.572 [483/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:49.572 [484/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.572 [485/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:49.572 [486/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.572 [487/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:49.572 [488/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:49.572 [489/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:49.836 [490/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:49.836 [491/710] Linking target lib/librte_net.so.24.0 00:01:49.836 [492/710] Linking target lib/librte_bbdev.so.24.0 00:01:49.836 [493/710] Linking target lib/librte_compressdev.so.24.0 00:01:49.836 [494/710] Linking target lib/librte_distributor.so.24.0 00:01:49.836 [495/710] Linking target lib/librte_cryptodev.so.24.0 00:01:49.836 [496/710] Linking target lib/librte_gpudev.so.24.0 00:01:49.836 [497/710] Linking target lib/librte_regexdev.so.24.0 00:01:49.836 [498/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:49.836 [499/710] Linking target lib/librte_reorder.so.24.0 00:01:49.836 [500/710] Linking target lib/librte_mldev.so.24.0 00:01:49.836 [501/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.836 [502/710] Linking static target drivers/librte_mempool_ring.a 00:01:49.836 [503/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.836 [504/710] Linking target lib/librte_sched.so.24.0 00:01:49.836 [505/710] Linking target lib/librte_fib.so.24.0 00:01:49.836 [506/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:49.836 [507/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:49.836 [508/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:50.097 [509/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:50.097 [510/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:50.097 [511/710] Linking target lib/librte_cmdline.so.24.0 00:01:50.097 [512/710] Linking target lib/librte_hash.so.24.0 00:01:50.097 [513/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:50.097 [514/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:50.097 [515/710] Linking target lib/librte_security.so.24.0 00:01:50.097 [516/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:50.097 [517/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:50.362 [518/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:50.363 [519/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:50.363 [520/710] Linking 
target lib/librte_efd.so.24.0 00:01:50.363 [521/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:50.363 [522/710] Linking target lib/librte_lpm.so.24.0 00:01:50.363 [523/710] Linking target lib/librte_member.so.24.0 00:01:50.622 [524/710] Linking target lib/librte_ipsec.so.24.0 00:01:50.622 [525/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:50.622 [526/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:50.622 [527/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:50.622 [528/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:50.622 [529/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:50.622 [530/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:50.941 [531/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:50.941 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:50.941 [533/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:50.941 [534/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:51.234 [535/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:51.234 [536/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:51.234 [537/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:51.234 [538/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:51.234 [539/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:51.234 [540/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:51.234 [541/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:51.494 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:51.494 [543/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:51.761 [544/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:51.761 [545/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:51.761 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:51.761 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:51.761 [548/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:51.761 [549/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:52.025 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:52.025 [551/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:52.025 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:52.025 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:52.025 [554/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:52.288 [555/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:52.288 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:52.288 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:52.288 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:52.288 [559/710] 
Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:52.548 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:52.814 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:53.078 [562/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:53.078 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:53.078 [564/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.078 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:53.078 [566/710] Linking target lib/librte_ethdev.so.24.0 00:01:53.078 [567/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:53.341 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:53.341 [569/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:53.341 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:53.341 [571/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:53.341 [572/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:53.341 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:53.341 [574/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:53.603 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:53.603 [576/710] Linking target lib/librte_metrics.so.24.0 00:01:53.603 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:53.603 [578/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:53.603 [579/710] Linking target lib/librte_bpf.so.24.0 00:01:53.603 [580/710] Linking target lib/librte_eventdev.so.24.0 00:01:53.603 [581/710] Linking target lib/librte_gro.so.24.0 00:01:53.864 [582/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:53.864 [583/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:53.864 [584/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:53.864 [585/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:53.864 [586/710] Linking target lib/librte_gso.so.24.0 00:01:53.864 [587/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:53.864 [588/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:53.864 [589/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:53.864 [590/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:53.864 [591/710] Linking target lib/librte_latencystats.so.24.0 00:01:53.864 [592/710] Linking target lib/librte_bitratestats.so.24.0 00:01:53.864 [593/710] Linking target lib/librte_pcapng.so.24.0 00:01:53.864 [594/710] Linking target lib/librte_ip_frag.so.24.0 00:01:53.864 [595/710] Linking target lib/librte_power.so.24.0 00:01:53.864 [596/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:54.134 [597/710] Linking target lib/librte_dispatcher.so.24.0 00:01:54.134 [598/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:54.134 [599/710] Compiling C 
object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:54.134 [600/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:54.134 [601/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:54.134 [602/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:54.134 [603/710] Linking target lib/librte_pdump.so.24.0 00:01:54.134 [604/710] Linking target lib/librte_graph.so.24.0 00:01:54.394 [605/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:54.394 [606/710] Linking target lib/librte_port.so.24.0 00:01:54.394 [607/710] Linking static target lib/librte_pdcp.a 00:01:54.394 [608/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:54.394 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:54.394 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:54.394 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:54.394 [612/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:54.656 [613/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:54.656 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:54.656 [615/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:54.656 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:54.656 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:54.656 [618/710] Linking target lib/librte_table.so.24.0 00:01:54.918 [619/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:54.918 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:54.918 [621/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.918 [622/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:54.918 [623/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:54.918 [624/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:54.918 [625/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:54.918 [626/710] Linking target lib/librte_pdcp.so.24.0 00:01:54.918 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:55.181 [628/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:55.181 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:55.181 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:55.442 [631/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:55.701 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:55.701 [633/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:55.701 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:55.701 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:55.960 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:55.960 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:55.960 [638/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:55.960 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:55.960 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:55.960 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:55.960 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:55.960 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:56.218 [644/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:56.218 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:56.218 [646/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:56.218 [647/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:56.476 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:56.476 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:56.476 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:56.476 [651/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:56.734 [652/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:56.734 [653/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:56.734 [654/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:56.734 [655/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:56.992 [656/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:56.992 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:56.992 [658/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:56.992 [659/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:57.249 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:57.249 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.249 [662/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.249 [663/710] Linking static target drivers/librte_net_i40e.a 00:01:57.507 [664/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:57.507 [665/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:57.764 [666/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:57.764 [667/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.764 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:58.020 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:58.020 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:58.277 [671/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:58.277 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:58.534 [673/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:58.534 [674/710] Linking static target lib/librte_node.a 00:01:58.791 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.048 [676/710] Linking target lib/librte_node.so.24.0 00:01:59.980 [677/710] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:00.238 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:00.496 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:01.870 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:02.803 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:09.390 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.454 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.454 [684/710] Linking static target lib/librte_vhost.a 00:02:41.454 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.454 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:56.331 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:56.331 [688/710] Linking static target lib/librte_pipeline.a 00:02:56.331 [689/710] Linking target app/dpdk-test-cmdline 00:02:56.331 [690/710] Linking target app/dpdk-pdump 00:02:56.331 [691/710] Linking target app/dpdk-dumpcap 00:02:56.331 [692/710] Linking target app/dpdk-test-sad 00:02:56.331 [693/710] Linking target app/dpdk-proc-info 00:02:56.331 [694/710] Linking target app/dpdk-test-dma-perf 00:02:56.331 [695/710] Linking target app/dpdk-test-acl 00:02:56.331 [696/710] Linking target app/dpdk-test-bbdev 00:02:56.331 [697/710] Linking target app/dpdk-test-pipeline 00:02:56.331 [698/710] Linking target app/dpdk-test-gpudev 00:02:56.331 [699/710] Linking target app/dpdk-test-flow-perf 00:02:56.331 [700/710] Linking target app/dpdk-test-fib 00:02:56.331 [701/710] Linking target app/dpdk-test-eventdev 00:02:56.331 [702/710] Linking target app/dpdk-graph 00:02:56.331 [703/710] Linking target app/dpdk-test-compress-perf 00:02:56.331 [704/710] Linking target app/dpdk-test-crypto-perf 00:02:56.331 [705/710] Linking target app/dpdk-test-mldev 00:02:56.331 [706/710] Linking target app/dpdk-test-regex 00:02:56.331 [707/710] Linking target app/dpdk-test-security-perf 00:02:56.331 [708/710] Linking target app/dpdk-testpmd 00:02:57.265 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.265 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:57.265 01:50:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:57.265 01:50:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:57.265 01:50:02 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:57.523 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:57.523 [0/1] Installing files. 
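(The `build_native_dpdk -- $` trace lines above show a host-OS check before the install step. A plausible shell equivalent is sketched here; it is inferred from the xtrace output alone, and the if-block structure and the build_dir variable are illustrative assumptions, not the literal contents of common/autobuild_common.sh.)

    # inferred from the xtrace above; not the literal script
    build_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp   # hypothetical variable name
    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        :   # FreeBSD-specific path; not taken on this Linux builder
    fi
    ninja -C "$build_dir" -j48 install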
00:02:57.785 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:57.786 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:57.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.788 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:57.789 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:57.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:57.789 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.789 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.358 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:58.358 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.359 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:58.359 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.359 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:58.359 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.359 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:58.359 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:58.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:58.362 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:58.362 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:58.362 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:58.362 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:58.363 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:58.363 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:58.363 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:58.363 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:58.363 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:58.363 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:58.363 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:58.363 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:58.363 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:58.363 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:58.363 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:58.363 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:58.363 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:58.363 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:58.363 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:58.363 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:58.363 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:58.363 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:58.363 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:58.363 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:58.363 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:58.363 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:58.363 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:58.363 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:58.363 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:58.363 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:58.363 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:58.363 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:58.363 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:58.363 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:58.363 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:58.363 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:58.363 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:58.363 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:58.363 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:58.363 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:58.363 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:58.363 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:58.363 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:58.363 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:58.363 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:58.363 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:58.363 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:58.363 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:58.363 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:58.363 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:58.363 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:58.363 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:58.363 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:58.363 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:58.363 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:58.363 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:58.363 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:58.363 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:58.363 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:58.363 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:58.363 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:58.363 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:58.363 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:58.363 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:58.363 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:58.363 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:58.363 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:58.363 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:58.363 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:58.363 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:58.363 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:58.363 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:58.363 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:58.363 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:58.363 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:58.363 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:58.622 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:58.622 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:58.622 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:58.622 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:58.622 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:58.622 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:58.622 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:58.622 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:58.622 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:58.622 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:58.622 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:58.622 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:58.622 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:58.622 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:58.622 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:58.622 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:58.622 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:58.622 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:58.622 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:58.622 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:58.622 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:58.622 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:58.622 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:58.622 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:58.622 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:58.622 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:58.622 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:58.622 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:58.622 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:58.622 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:58.622 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:58.622 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:58.622 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:58.622 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:58.622 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:58.622 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:58.622 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:58.622 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:58.622 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:58.622 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:58.622 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:58.622 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:58.622 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:58.622 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:58.622 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:58.622 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:58.622 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:58.622 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:58.622 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:58.622 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:58.622 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:58.622 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:58.622 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:58.622 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:58.622 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:58.622 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:58.622 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:58.622 01:50:04 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:58.622 01:50:04 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:58.622 00:02:58.622 real 1m29.803s 00:02:58.622 user 18m2.495s 00:02:58.622 sys 2m6.207s 00:02:58.622 01:50:04 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:58.622 01:50:04 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:58.622 ************************************ 00:02:58.622 END TEST build_native_dpdk 00:02:58.622 ************************************ 00:02:58.622 01:50:04 -- common/autotest_common.sh@1142 -- $ return 0 00:02:58.622 01:50:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:58.622 01:50:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:58.622 01:50:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:58.622 01:50:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:58.622 01:50:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:58.622 01:50:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:58.622 01:50:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:58.623 01:50:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:58.623 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:58.623 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.623 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.623 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:59.189 Using 'verbs' RDMA provider 00:03:09.770 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:17.873 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:17.873 Creating mk/config.mk...done. 00:03:17.873 Creating mk/cc.flags.mk...done. 00:03:17.873 Type 'make' to build. 00:03:17.873 01:50:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:17.873 01:50:23 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:17.873 01:50:23 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:17.873 01:50:23 -- common/autotest_common.sh@10 -- $ set +x 00:03:17.873 ************************************ 00:03:17.873 START TEST make 00:03:17.873 ************************************ 00:03:17.873 01:50:23 make -- common/autotest_common.sh@1123 -- $ make -j48 00:03:18.131 make[1]: Nothing to be done for 'all'. 
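The configure invocation above points SPDK at the DPDK tree installed in the preceding step via --with-dpdk, and the "Using .../dpdk/build/lib/pkgconfig for additional libs" line shows where the extra DPDK libraries are resolved from. A minimal sketch of reproducing that step by hand, with the flags copied from the command above (trim the options that do not apply to your environment):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-shared \
      --with-rdma --with-idxd --with-ublk --with-vfio-user \
      --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  # configure reads dpdk/build/lib/pkgconfig to pick up the shared DPDK libraries
  make -j"$(nproc)"   # this job uses -j48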
00:03:20.055 The Meson build system 00:03:20.055 Version: 1.3.1 00:03:20.055 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:20.055 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:20.055 Build type: native build 00:03:20.055 Project name: libvfio-user 00:03:20.055 Project version: 0.0.1 00:03:20.055 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:20.055 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:20.055 Host machine cpu family: x86_64 00:03:20.055 Host machine cpu: x86_64 00:03:20.055 Run-time dependency threads found: YES 00:03:20.055 Library dl found: YES 00:03:20.055 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:20.055 Run-time dependency json-c found: YES 0.17 00:03:20.055 Run-time dependency cmocka found: YES 1.1.7 00:03:20.055 Program pytest-3 found: NO 00:03:20.055 Program flake8 found: NO 00:03:20.055 Program misspell-fixer found: NO 00:03:20.055 Program restructuredtext-lint found: NO 00:03:20.055 Program valgrind found: YES (/usr/bin/valgrind) 00:03:20.055 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:20.055 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:20.055 Compiler for C supports arguments -Wwrite-strings: YES 00:03:20.055 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:20.055 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:20.055 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:20.055 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:20.055 Build targets in project: 8 00:03:20.055 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:20.055 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:20.055 00:03:20.055 libvfio-user 0.0.1 00:03:20.055 00:03:20.055 User defined options 00:03:20.055 buildtype : debug 00:03:20.055 default_library: shared 00:03:20.055 libdir : /usr/local/lib 00:03:20.055 00:03:20.055 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:20.635 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:20.635 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:20.901 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:20.901 [3/37] Compiling C object samples/null.p/null.c.o 00:03:20.901 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:20.901 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:20.901 [6/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:20.901 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:20.901 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:20.901 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:20.901 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:20.901 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:20.901 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:20.901 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:20.901 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:20.901 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:20.901 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:20.901 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:20.901 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:20.901 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:20.901 [20/37] Compiling C object samples/server.p/server.c.o 00:03:20.901 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:20.901 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:20.901 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:20.901 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:20.901 [25/37] Compiling C object samples/client.p/client.c.o 00:03:20.901 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:21.160 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:21.160 [28/37] Linking target samples/client 00:03:21.160 [29/37] Linking target test/unit_tests 00:03:21.160 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:21.160 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:21.424 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:21.424 [33/37] Linking target samples/lspci 00:03:21.424 [34/37] Linking target samples/server 00:03:21.424 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:21.424 [36/37] Linking target samples/null 00:03:21.424 [37/37] Linking target samples/gpio-pci-idio-16 00:03:21.424 INFO: autodetecting backend as ninja 00:03:21.424 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
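libvfio-user (pulled in by --with-vfio-user) is configured here as its own Meson project with buildtype debug, a shared default_library and libdir /usr/local/lib, built with ninja, and then staged into spdk/build/libvfio-user with a DESTDIR install (shown below). A rough equivalent of the sequence, assuming the options listed above are the only ones SPDK's build script passes:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  meson setup build/libvfio-user/build-debug libvfio-user \
      --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  ninja -C build/libvfio-user/build-debug
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C build/libvfio-user/build-debug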
00:03:21.685 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:22.258 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:22.258 ninja: no work to do. 00:03:34.486 CC lib/ut_mock/mock.o 00:03:34.486 CC lib/log/log.o 00:03:34.486 CC lib/ut/ut.o 00:03:34.486 CC lib/log/log_flags.o 00:03:34.486 CC lib/log/log_deprecated.o 00:03:34.486 LIB libspdk_log.a 00:03:34.486 LIB libspdk_ut.a 00:03:34.486 LIB libspdk_ut_mock.a 00:03:34.486 SO libspdk_ut.so.2.0 00:03:34.486 SO libspdk_ut_mock.so.6.0 00:03:34.486 SO libspdk_log.so.7.0 00:03:34.486 SYMLINK libspdk_ut.so 00:03:34.486 SYMLINK libspdk_ut_mock.so 00:03:34.486 SYMLINK libspdk_log.so 00:03:34.486 CC lib/dma/dma.o 00:03:34.486 CC lib/ioat/ioat.o 00:03:34.486 CXX lib/trace_parser/trace.o 00:03:34.486 CC lib/util/base64.o 00:03:34.486 CC lib/util/bit_array.o 00:03:34.486 CC lib/util/cpuset.o 00:03:34.486 CC lib/util/crc16.o 00:03:34.486 CC lib/util/crc32.o 00:03:34.486 CC lib/util/crc32c.o 00:03:34.486 CC lib/util/crc32_ieee.o 00:03:34.486 CC lib/util/crc64.o 00:03:34.486 CC lib/util/dif.o 00:03:34.486 CC lib/util/fd.o 00:03:34.486 CC lib/util/file.o 00:03:34.486 CC lib/util/hexlify.o 00:03:34.486 CC lib/util/iov.o 00:03:34.486 CC lib/util/math.o 00:03:34.486 CC lib/util/pipe.o 00:03:34.486 CC lib/util/strerror_tls.o 00:03:34.486 CC lib/util/string.o 00:03:34.486 CC lib/util/uuid.o 00:03:34.486 CC lib/util/fd_group.o 00:03:34.486 CC lib/util/xor.o 00:03:34.486 CC lib/util/zipf.o 00:03:34.486 CC lib/vfio_user/host/vfio_user_pci.o 00:03:34.486 CC lib/vfio_user/host/vfio_user.o 00:03:34.486 LIB libspdk_dma.a 00:03:34.486 SO libspdk_dma.so.4.0 00:03:34.486 SYMLINK libspdk_dma.so 00:03:34.486 LIB libspdk_ioat.a 00:03:34.486 SO libspdk_ioat.so.7.0 00:03:34.486 LIB libspdk_vfio_user.a 00:03:34.486 SYMLINK libspdk_ioat.so 00:03:34.486 SO libspdk_vfio_user.so.5.0 00:03:34.486 SYMLINK libspdk_vfio_user.so 00:03:34.486 LIB libspdk_util.a 00:03:34.486 SO libspdk_util.so.9.1 00:03:34.781 SYMLINK libspdk_util.so 00:03:34.781 CC lib/idxd/idxd.o 00:03:34.781 CC lib/conf/conf.o 00:03:34.781 CC lib/rdma_utils/rdma_utils.o 00:03:34.781 CC lib/rdma_provider/common.o 00:03:34.781 CC lib/json/json_parse.o 00:03:34.781 CC lib/env_dpdk/env.o 00:03:34.781 CC lib/idxd/idxd_user.o 00:03:34.781 CC lib/vmd/vmd.o 00:03:34.781 CC lib/json/json_util.o 00:03:34.781 CC lib/env_dpdk/memory.o 00:03:34.781 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:34.781 CC lib/idxd/idxd_kernel.o 00:03:34.781 CC lib/vmd/led.o 00:03:34.781 CC lib/env_dpdk/pci.o 00:03:34.781 CC lib/json/json_write.o 00:03:34.781 CC lib/env_dpdk/init.o 00:03:34.781 CC lib/env_dpdk/threads.o 00:03:34.781 CC lib/env_dpdk/pci_ioat.o 00:03:34.781 CC lib/env_dpdk/pci_virtio.o 00:03:34.781 CC lib/env_dpdk/pci_vmd.o 00:03:34.781 CC lib/env_dpdk/pci_idxd.o 00:03:34.781 CC lib/env_dpdk/pci_event.o 00:03:34.781 CC lib/env_dpdk/sigbus_handler.o 00:03:34.781 CC lib/env_dpdk/pci_dpdk.o 00:03:34.781 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:34.781 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:35.055 LIB libspdk_trace_parser.a 00:03:35.055 SO libspdk_trace_parser.so.5.0 00:03:35.055 LIB libspdk_rdma_provider.a 00:03:35.055 SYMLINK libspdk_trace_parser.so 00:03:35.055 SO libspdk_rdma_provider.so.6.0 00:03:35.055 LIB libspdk_conf.a 00:03:35.055 SO libspdk_conf.so.6.0 00:03:35.313 LIB libspdk_rdma_utils.a 00:03:35.313 SYMLINK 
libspdk_rdma_provider.so 00:03:35.313 SO libspdk_rdma_utils.so.1.0 00:03:35.313 SYMLINK libspdk_conf.so 00:03:35.313 SYMLINK libspdk_rdma_utils.so 00:03:35.313 LIB libspdk_json.a 00:03:35.313 SO libspdk_json.so.6.0 00:03:35.313 SYMLINK libspdk_json.so 00:03:35.569 LIB libspdk_idxd.a 00:03:35.569 SO libspdk_idxd.so.12.0 00:03:35.569 CC lib/jsonrpc/jsonrpc_server.o 00:03:35.569 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:35.569 CC lib/jsonrpc/jsonrpc_client.o 00:03:35.569 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:35.569 LIB libspdk_vmd.a 00:03:35.569 SYMLINK libspdk_idxd.so 00:03:35.569 SO libspdk_vmd.so.6.0 00:03:35.569 SYMLINK libspdk_vmd.so 00:03:35.826 LIB libspdk_jsonrpc.a 00:03:35.826 SO libspdk_jsonrpc.so.6.0 00:03:35.826 SYMLINK libspdk_jsonrpc.so 00:03:36.084 CC lib/rpc/rpc.o 00:03:36.341 LIB libspdk_rpc.a 00:03:36.341 SO libspdk_rpc.so.6.0 00:03:36.341 SYMLINK libspdk_rpc.so 00:03:36.599 CC lib/notify/notify.o 00:03:36.599 CC lib/trace/trace.o 00:03:36.599 CC lib/keyring/keyring.o 00:03:36.599 CC lib/notify/notify_rpc.o 00:03:36.599 CC lib/keyring/keyring_rpc.o 00:03:36.599 CC lib/trace/trace_flags.o 00:03:36.599 CC lib/trace/trace_rpc.o 00:03:36.599 LIB libspdk_notify.a 00:03:36.599 SO libspdk_notify.so.6.0 00:03:36.857 LIB libspdk_keyring.a 00:03:36.857 SYMLINK libspdk_notify.so 00:03:36.857 LIB libspdk_trace.a 00:03:36.857 SO libspdk_keyring.so.1.0 00:03:36.857 SO libspdk_trace.so.10.0 00:03:36.857 SYMLINK libspdk_keyring.so 00:03:36.857 SYMLINK libspdk_trace.so 00:03:36.857 LIB libspdk_env_dpdk.a 00:03:36.857 SO libspdk_env_dpdk.so.14.1 00:03:37.115 CC lib/sock/sock.o 00:03:37.115 CC lib/sock/sock_rpc.o 00:03:37.115 CC lib/thread/thread.o 00:03:37.115 CC lib/thread/iobuf.o 00:03:37.115 SYMLINK libspdk_env_dpdk.so 00:03:37.373 LIB libspdk_sock.a 00:03:37.373 SO libspdk_sock.so.10.0 00:03:37.373 SYMLINK libspdk_sock.so 00:03:37.631 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:37.631 CC lib/nvme/nvme_ctrlr.o 00:03:37.631 CC lib/nvme/nvme_fabric.o 00:03:37.631 CC lib/nvme/nvme_ns_cmd.o 00:03:37.631 CC lib/nvme/nvme_ns.o 00:03:37.631 CC lib/nvme/nvme_pcie_common.o 00:03:37.631 CC lib/nvme/nvme_pcie.o 00:03:37.631 CC lib/nvme/nvme_qpair.o 00:03:37.631 CC lib/nvme/nvme.o 00:03:37.631 CC lib/nvme/nvme_quirks.o 00:03:37.631 CC lib/nvme/nvme_transport.o 00:03:37.631 CC lib/nvme/nvme_discovery.o 00:03:37.631 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:37.631 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:37.631 CC lib/nvme/nvme_tcp.o 00:03:37.631 CC lib/nvme/nvme_opal.o 00:03:37.631 CC lib/nvme/nvme_io_msg.o 00:03:37.631 CC lib/nvme/nvme_poll_group.o 00:03:37.631 CC lib/nvme/nvme_zns.o 00:03:37.631 CC lib/nvme/nvme_stubs.o 00:03:37.631 CC lib/nvme/nvme_auth.o 00:03:37.631 CC lib/nvme/nvme_cuse.o 00:03:37.631 CC lib/nvme/nvme_vfio_user.o 00:03:37.631 CC lib/nvme/nvme_rdma.o 00:03:38.564 LIB libspdk_thread.a 00:03:38.564 SO libspdk_thread.so.10.1 00:03:38.564 SYMLINK libspdk_thread.so 00:03:38.822 CC lib/virtio/virtio.o 00:03:38.822 CC lib/vfu_tgt/tgt_endpoint.o 00:03:38.822 CC lib/vfu_tgt/tgt_rpc.o 00:03:38.822 CC lib/virtio/virtio_vhost_user.o 00:03:38.822 CC lib/virtio/virtio_vfio_user.o 00:03:38.822 CC lib/accel/accel.o 00:03:38.822 CC lib/accel/accel_rpc.o 00:03:38.822 CC lib/virtio/virtio_pci.o 00:03:38.822 CC lib/blob/blobstore.o 00:03:38.822 CC lib/accel/accel_sw.o 00:03:38.822 CC lib/init/json_config.o 00:03:38.822 CC lib/blob/request.o 00:03:38.822 CC lib/init/subsystem.o 00:03:38.822 CC lib/blob/zeroes.o 00:03:38.822 CC lib/init/subsystem_rpc.o 00:03:38.822 CC lib/blob/blob_bs_dev.o 00:03:38.822 CC 
lib/init/rpc.o 00:03:39.080 LIB libspdk_init.a 00:03:39.080 SO libspdk_init.so.5.0 00:03:39.080 LIB libspdk_virtio.a 00:03:39.080 LIB libspdk_vfu_tgt.a 00:03:39.338 SYMLINK libspdk_init.so 00:03:39.338 SO libspdk_vfu_tgt.so.3.0 00:03:39.338 SO libspdk_virtio.so.7.0 00:03:39.338 SYMLINK libspdk_vfu_tgt.so 00:03:39.338 SYMLINK libspdk_virtio.so 00:03:39.338 CC lib/event/app.o 00:03:39.338 CC lib/event/reactor.o 00:03:39.338 CC lib/event/log_rpc.o 00:03:39.338 CC lib/event/app_rpc.o 00:03:39.338 CC lib/event/scheduler_static.o 00:03:39.905 LIB libspdk_event.a 00:03:39.905 SO libspdk_event.so.14.0 00:03:39.905 SYMLINK libspdk_event.so 00:03:39.905 LIB libspdk_accel.a 00:03:39.905 SO libspdk_accel.so.15.1 00:03:39.905 SYMLINK libspdk_accel.so 00:03:40.163 LIB libspdk_nvme.a 00:03:40.163 CC lib/bdev/bdev.o 00:03:40.163 CC lib/bdev/bdev_rpc.o 00:03:40.163 CC lib/bdev/bdev_zone.o 00:03:40.163 CC lib/bdev/part.o 00:03:40.163 CC lib/bdev/scsi_nvme.o 00:03:40.163 SO libspdk_nvme.so.13.1 00:03:40.420 SYMLINK libspdk_nvme.so 00:03:41.794 LIB libspdk_blob.a 00:03:41.794 SO libspdk_blob.so.11.0 00:03:42.052 SYMLINK libspdk_blob.so 00:03:42.052 CC lib/lvol/lvol.o 00:03:42.052 CC lib/blobfs/blobfs.o 00:03:42.052 CC lib/blobfs/tree.o 00:03:42.985 LIB libspdk_blobfs.a 00:03:42.985 SO libspdk_blobfs.so.10.0 00:03:42.985 SYMLINK libspdk_blobfs.so 00:03:42.985 LIB libspdk_lvol.a 00:03:42.985 LIB libspdk_bdev.a 00:03:42.985 SO libspdk_lvol.so.10.0 00:03:42.985 SO libspdk_bdev.so.15.1 00:03:42.985 SYMLINK libspdk_lvol.so 00:03:43.250 SYMLINK libspdk_bdev.so 00:03:43.250 CC lib/nbd/nbd.o 00:03:43.250 CC lib/nvmf/ctrlr.o 00:03:43.250 CC lib/ublk/ublk.o 00:03:43.250 CC lib/scsi/dev.o 00:03:43.250 CC lib/nvmf/ctrlr_discovery.o 00:03:43.250 CC lib/scsi/lun.o 00:03:43.250 CC lib/nbd/nbd_rpc.o 00:03:43.250 CC lib/ublk/ublk_rpc.o 00:03:43.250 CC lib/ftl/ftl_core.o 00:03:43.250 CC lib/nvmf/ctrlr_bdev.o 00:03:43.250 CC lib/scsi/port.o 00:03:43.250 CC lib/nvmf/subsystem.o 00:03:43.250 CC lib/ftl/ftl_init.o 00:03:43.250 CC lib/nvmf/nvmf.o 00:03:43.250 CC lib/scsi/scsi.o 00:03:43.250 CC lib/ftl/ftl_layout.o 00:03:43.250 CC lib/scsi/scsi_bdev.o 00:03:43.250 CC lib/nvmf/transport.o 00:03:43.250 CC lib/nvmf/nvmf_rpc.o 00:03:43.250 CC lib/ftl/ftl_debug.o 00:03:43.250 CC lib/scsi/scsi_pr.o 00:03:43.250 CC lib/nvmf/tcp.o 00:03:43.250 CC lib/ftl/ftl_io.o 00:03:43.250 CC lib/scsi/scsi_rpc.o 00:03:43.250 CC lib/nvmf/stubs.o 00:03:43.250 CC lib/scsi/task.o 00:03:43.250 CC lib/nvmf/mdns_server.o 00:03:43.250 CC lib/nvmf/vfio_user.o 00:03:43.250 CC lib/ftl/ftl_sb.o 00:03:43.250 CC lib/ftl/ftl_l2p.o 00:03:43.250 CC lib/nvmf/rdma.o 00:03:43.250 CC lib/nvmf/auth.o 00:03:43.250 CC lib/ftl/ftl_l2p_flat.o 00:03:43.250 CC lib/ftl/ftl_nv_cache.o 00:03:43.250 CC lib/ftl/ftl_band.o 00:03:43.250 CC lib/ftl/ftl_band_ops.o 00:03:43.250 CC lib/ftl/ftl_writer.o 00:03:43.250 CC lib/ftl/ftl_rq.o 00:03:43.250 CC lib/ftl/ftl_reloc.o 00:03:43.250 CC lib/ftl/ftl_l2p_cache.o 00:03:43.250 CC lib/ftl/ftl_p2l.o 00:03:43.250 CC lib/ftl/mngt/ftl_mngt.o 00:03:43.250 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:43.250 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:43.250 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:43.250 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:43.250 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:43.250 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:43.821 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:43.821 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:43.821 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:43.821 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:43.821 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:43.821 
CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:43.821 CC lib/ftl/utils/ftl_conf.o 00:03:43.821 CC lib/ftl/utils/ftl_mempool.o 00:03:43.821 CC lib/ftl/utils/ftl_md.o 00:03:43.821 CC lib/ftl/utils/ftl_bitmap.o 00:03:43.821 CC lib/ftl/utils/ftl_property.o 00:03:43.821 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:43.821 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:43.821 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:43.821 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:43.821 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:43.821 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:43.821 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:43.821 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:43.821 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:43.821 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:44.081 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:44.081 CC lib/ftl/base/ftl_base_dev.o 00:03:44.081 CC lib/ftl/base/ftl_base_bdev.o 00:03:44.081 CC lib/ftl/ftl_trace.o 00:03:44.081 LIB libspdk_nbd.a 00:03:44.081 SO libspdk_nbd.so.7.0 00:03:44.339 SYMLINK libspdk_nbd.so 00:03:44.339 LIB libspdk_scsi.a 00:03:44.339 SO libspdk_scsi.so.9.0 00:03:44.339 LIB libspdk_ublk.a 00:03:44.339 SYMLINK libspdk_scsi.so 00:03:44.339 SO libspdk_ublk.so.3.0 00:03:44.339 SYMLINK libspdk_ublk.so 00:03:44.597 CC lib/vhost/vhost.o 00:03:44.597 CC lib/iscsi/conn.o 00:03:44.597 CC lib/vhost/vhost_rpc.o 00:03:44.597 CC lib/iscsi/init_grp.o 00:03:44.597 CC lib/vhost/vhost_scsi.o 00:03:44.597 CC lib/iscsi/iscsi.o 00:03:44.597 CC lib/vhost/vhost_blk.o 00:03:44.597 CC lib/iscsi/md5.o 00:03:44.597 CC lib/vhost/rte_vhost_user.o 00:03:44.597 CC lib/iscsi/param.o 00:03:44.597 CC lib/iscsi/portal_grp.o 00:03:44.597 CC lib/iscsi/tgt_node.o 00:03:44.597 CC lib/iscsi/iscsi_subsystem.o 00:03:44.597 CC lib/iscsi/iscsi_rpc.o 00:03:44.597 CC lib/iscsi/task.o 00:03:44.856 LIB libspdk_ftl.a 00:03:44.856 SO libspdk_ftl.so.9.0 00:03:45.423 SYMLINK libspdk_ftl.so 00:03:45.681 LIB libspdk_vhost.a 00:03:45.681 SO libspdk_vhost.so.8.0 00:03:45.939 LIB libspdk_nvmf.a 00:03:45.939 SYMLINK libspdk_vhost.so 00:03:45.939 LIB libspdk_iscsi.a 00:03:45.939 SO libspdk_nvmf.so.18.1 00:03:45.939 SO libspdk_iscsi.so.8.0 00:03:46.197 SYMLINK libspdk_iscsi.so 00:03:46.197 SYMLINK libspdk_nvmf.so 00:03:46.456 CC module/vfu_device/vfu_virtio.o 00:03:46.456 CC module/env_dpdk/env_dpdk_rpc.o 00:03:46.456 CC module/vfu_device/vfu_virtio_blk.o 00:03:46.456 CC module/vfu_device/vfu_virtio_scsi.o 00:03:46.456 CC module/vfu_device/vfu_virtio_rpc.o 00:03:46.456 CC module/keyring/linux/keyring.o 00:03:46.456 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:46.456 CC module/keyring/linux/keyring_rpc.o 00:03:46.456 CC module/blob/bdev/blob_bdev.o 00:03:46.456 CC module/keyring/file/keyring.o 00:03:46.456 CC module/accel/iaa/accel_iaa.o 00:03:46.456 CC module/accel/ioat/accel_ioat.o 00:03:46.456 CC module/keyring/file/keyring_rpc.o 00:03:46.456 CC module/accel/error/accel_error.o 00:03:46.456 CC module/accel/iaa/accel_iaa_rpc.o 00:03:46.456 CC module/accel/ioat/accel_ioat_rpc.o 00:03:46.456 CC module/sock/posix/posix.o 00:03:46.456 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:46.456 CC module/accel/error/accel_error_rpc.o 00:03:46.456 CC module/scheduler/gscheduler/gscheduler.o 00:03:46.456 CC module/accel/dsa/accel_dsa.o 00:03:46.456 CC module/accel/dsa/accel_dsa_rpc.o 00:03:46.715 LIB libspdk_env_dpdk_rpc.a 00:03:46.715 SO libspdk_env_dpdk_rpc.so.6.0 00:03:46.715 SYMLINK libspdk_env_dpdk_rpc.so 00:03:46.715 LIB libspdk_keyring_file.a 00:03:46.715 LIB libspdk_keyring_linux.a 00:03:46.715 LIB 
libspdk_scheduler_gscheduler.a 00:03:46.715 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.715 SO libspdk_keyring_file.so.1.0 00:03:46.715 SO libspdk_keyring_linux.so.1.0 00:03:46.715 SO libspdk_scheduler_gscheduler.so.4.0 00:03:46.715 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:46.715 LIB libspdk_accel_error.a 00:03:46.715 LIB libspdk_scheduler_dynamic.a 00:03:46.715 LIB libspdk_accel_ioat.a 00:03:46.715 LIB libspdk_accel_iaa.a 00:03:46.715 SO libspdk_scheduler_dynamic.so.4.0 00:03:46.715 SO libspdk_accel_error.so.2.0 00:03:46.715 SO libspdk_accel_ioat.so.6.0 00:03:46.715 SYMLINK libspdk_keyring_file.so 00:03:46.715 SYMLINK libspdk_keyring_linux.so 00:03:46.715 SYMLINK libspdk_scheduler_gscheduler.so 00:03:46.715 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:46.715 SO libspdk_accel_iaa.so.3.0 00:03:46.715 SYMLINK libspdk_scheduler_dynamic.so 00:03:46.715 LIB libspdk_accel_dsa.a 00:03:46.715 SYMLINK libspdk_accel_error.so 00:03:46.715 SYMLINK libspdk_accel_ioat.so 00:03:46.715 LIB libspdk_blob_bdev.a 00:03:46.973 SO libspdk_accel_dsa.so.5.0 00:03:46.973 SYMLINK libspdk_accel_iaa.so 00:03:46.973 SO libspdk_blob_bdev.so.11.0 00:03:46.973 SYMLINK libspdk_accel_dsa.so 00:03:46.973 SYMLINK libspdk_blob_bdev.so 00:03:47.232 LIB libspdk_vfu_device.a 00:03:47.232 SO libspdk_vfu_device.so.3.0 00:03:47.232 CC module/blobfs/bdev/blobfs_bdev.o 00:03:47.232 CC module/bdev/delay/vbdev_delay.o 00:03:47.232 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:47.232 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:47.232 CC module/bdev/lvol/vbdev_lvol.o 00:03:47.232 CC module/bdev/null/bdev_null.o 00:03:47.232 CC module/bdev/malloc/bdev_malloc.o 00:03:47.232 CC module/bdev/error/vbdev_error.o 00:03:47.232 CC module/bdev/gpt/gpt.o 00:03:47.232 CC module/bdev/gpt/vbdev_gpt.o 00:03:47.232 CC module/bdev/null/bdev_null_rpc.o 00:03:47.232 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:47.232 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:47.232 CC module/bdev/error/vbdev_error_rpc.o 00:03:47.232 CC module/bdev/aio/bdev_aio.o 00:03:47.232 CC module/bdev/split/vbdev_split.o 00:03:47.232 CC module/bdev/raid/bdev_raid.o 00:03:47.232 CC module/bdev/raid/bdev_raid_rpc.o 00:03:47.232 CC module/bdev/aio/bdev_aio_rpc.o 00:03:47.232 CC module/bdev/split/vbdev_split_rpc.o 00:03:47.232 CC module/bdev/raid/bdev_raid_sb.o 00:03:47.232 CC module/bdev/nvme/bdev_nvme.o 00:03:47.232 CC module/bdev/passthru/vbdev_passthru.o 00:03:47.232 CC module/bdev/raid/raid0.o 00:03:47.232 CC module/bdev/iscsi/bdev_iscsi.o 00:03:47.232 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:47.232 CC module/bdev/raid/raid1.o 00:03:47.232 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:47.232 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:47.232 CC module/bdev/raid/concat.o 00:03:47.232 CC module/bdev/nvme/nvme_rpc.o 00:03:47.232 CC module/bdev/nvme/bdev_mdns_client.o 00:03:47.232 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:47.232 CC module/bdev/ftl/bdev_ftl.o 00:03:47.232 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:47.232 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:47.232 CC module/bdev/nvme/vbdev_opal.o 00:03:47.232 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:47.232 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:47.232 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:47.232 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:47.232 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:47.232 SYMLINK libspdk_vfu_device.so 00:03:47.490 LIB libspdk_sock_posix.a 00:03:47.490 SO libspdk_sock_posix.so.6.0 00:03:47.490 LIB libspdk_bdev_split.a 00:03:47.490 LIB 
libspdk_blobfs_bdev.a 00:03:47.490 SO libspdk_bdev_split.so.6.0 00:03:47.490 SO libspdk_blobfs_bdev.so.6.0 00:03:47.490 SYMLINK libspdk_sock_posix.so 00:03:47.753 SYMLINK libspdk_bdev_split.so 00:03:47.753 SYMLINK libspdk_blobfs_bdev.so 00:03:47.753 LIB libspdk_bdev_gpt.a 00:03:47.753 LIB libspdk_bdev_error.a 00:03:47.753 LIB libspdk_bdev_ftl.a 00:03:47.753 LIB libspdk_bdev_passthru.a 00:03:47.753 LIB libspdk_bdev_null.a 00:03:47.753 SO libspdk_bdev_gpt.so.6.0 00:03:47.753 SO libspdk_bdev_error.so.6.0 00:03:47.753 SO libspdk_bdev_ftl.so.6.0 00:03:47.753 SO libspdk_bdev_passthru.so.6.0 00:03:47.753 SO libspdk_bdev_null.so.6.0 00:03:47.753 LIB libspdk_bdev_zone_block.a 00:03:47.753 SYMLINK libspdk_bdev_gpt.so 00:03:47.753 SO libspdk_bdev_zone_block.so.6.0 00:03:47.753 SYMLINK libspdk_bdev_error.so 00:03:47.753 LIB libspdk_bdev_iscsi.a 00:03:47.753 LIB libspdk_bdev_aio.a 00:03:47.753 SYMLINK libspdk_bdev_ftl.so 00:03:47.753 LIB libspdk_bdev_malloc.a 00:03:47.753 SYMLINK libspdk_bdev_null.so 00:03:47.753 SYMLINK libspdk_bdev_passthru.so 00:03:47.753 SO libspdk_bdev_iscsi.so.6.0 00:03:47.753 SO libspdk_bdev_aio.so.6.0 00:03:47.753 SO libspdk_bdev_malloc.so.6.0 00:03:47.753 LIB libspdk_bdev_delay.a 00:03:47.753 SYMLINK libspdk_bdev_zone_block.so 00:03:47.753 SO libspdk_bdev_delay.so.6.0 00:03:47.753 SYMLINK libspdk_bdev_iscsi.so 00:03:47.753 SYMLINK libspdk_bdev_aio.so 00:03:47.753 SYMLINK libspdk_bdev_malloc.so 00:03:48.050 SYMLINK libspdk_bdev_delay.so 00:03:48.050 LIB libspdk_bdev_lvol.a 00:03:48.050 LIB libspdk_bdev_virtio.a 00:03:48.050 SO libspdk_bdev_lvol.so.6.0 00:03:48.050 SO libspdk_bdev_virtio.so.6.0 00:03:48.050 SYMLINK libspdk_bdev_lvol.so 00:03:48.050 SYMLINK libspdk_bdev_virtio.so 00:03:48.308 LIB libspdk_bdev_raid.a 00:03:48.309 SO libspdk_bdev_raid.so.6.0 00:03:48.567 SYMLINK libspdk_bdev_raid.so 00:03:49.501 LIB libspdk_bdev_nvme.a 00:03:49.501 SO libspdk_bdev_nvme.so.7.0 00:03:49.759 SYMLINK libspdk_bdev_nvme.so 00:03:50.016 CC module/event/subsystems/vmd/vmd.o 00:03:50.016 CC module/event/subsystems/iobuf/iobuf.o 00:03:50.016 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:50.016 CC module/event/subsystems/keyring/keyring.o 00:03:50.016 CC module/event/subsystems/scheduler/scheduler.o 00:03:50.016 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:50.016 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:50.016 CC module/event/subsystems/sock/sock.o 00:03:50.016 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:50.016 LIB libspdk_event_keyring.a 00:03:50.273 LIB libspdk_event_scheduler.a 00:03:50.273 LIB libspdk_event_vfu_tgt.a 00:03:50.273 LIB libspdk_event_vhost_blk.a 00:03:50.273 LIB libspdk_event_vmd.a 00:03:50.273 LIB libspdk_event_sock.a 00:03:50.273 LIB libspdk_event_iobuf.a 00:03:50.273 SO libspdk_event_keyring.so.1.0 00:03:50.273 SO libspdk_event_vhost_blk.so.3.0 00:03:50.273 SO libspdk_event_vfu_tgt.so.3.0 00:03:50.273 SO libspdk_event_scheduler.so.4.0 00:03:50.273 SO libspdk_event_vmd.so.6.0 00:03:50.273 SO libspdk_event_sock.so.5.0 00:03:50.273 SO libspdk_event_iobuf.so.3.0 00:03:50.273 SYMLINK libspdk_event_keyring.so 00:03:50.273 SYMLINK libspdk_event_vhost_blk.so 00:03:50.273 SYMLINK libspdk_event_vfu_tgt.so 00:03:50.273 SYMLINK libspdk_event_scheduler.so 00:03:50.273 SYMLINK libspdk_event_sock.so 00:03:50.273 SYMLINK libspdk_event_vmd.so 00:03:50.273 SYMLINK libspdk_event_iobuf.so 00:03:50.530 CC module/event/subsystems/accel/accel.o 00:03:50.530 LIB libspdk_event_accel.a 00:03:50.530 SO libspdk_event_accel.so.6.0 00:03:50.530 SYMLINK 
libspdk_event_accel.so 00:03:50.788 CC module/event/subsystems/bdev/bdev.o 00:03:51.045 LIB libspdk_event_bdev.a 00:03:51.045 SO libspdk_event_bdev.so.6.0 00:03:51.045 SYMLINK libspdk_event_bdev.so 00:03:51.303 CC module/event/subsystems/ublk/ublk.o 00:03:51.303 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:51.303 CC module/event/subsystems/scsi/scsi.o 00:03:51.303 CC module/event/subsystems/nbd/nbd.o 00:03:51.303 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:51.303 LIB libspdk_event_nbd.a 00:03:51.303 LIB libspdk_event_ublk.a 00:03:51.303 LIB libspdk_event_scsi.a 00:03:51.303 SO libspdk_event_ublk.so.3.0 00:03:51.303 SO libspdk_event_nbd.so.6.0 00:03:51.303 SO libspdk_event_scsi.so.6.0 00:03:51.560 SYMLINK libspdk_event_ublk.so 00:03:51.560 SYMLINK libspdk_event_nbd.so 00:03:51.560 SYMLINK libspdk_event_scsi.so 00:03:51.560 LIB libspdk_event_nvmf.a 00:03:51.560 SO libspdk_event_nvmf.so.6.0 00:03:51.560 SYMLINK libspdk_event_nvmf.so 00:03:51.560 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:51.560 CC module/event/subsystems/iscsi/iscsi.o 00:03:51.818 LIB libspdk_event_vhost_scsi.a 00:03:51.818 SO libspdk_event_vhost_scsi.so.3.0 00:03:51.818 LIB libspdk_event_iscsi.a 00:03:51.818 SO libspdk_event_iscsi.so.6.0 00:03:51.818 SYMLINK libspdk_event_vhost_scsi.so 00:03:51.818 SYMLINK libspdk_event_iscsi.so 00:03:52.075 SO libspdk.so.6.0 00:03:52.075 SYMLINK libspdk.so 00:03:52.075 CC app/trace_record/trace_record.o 00:03:52.075 CXX app/trace/trace.o 00:03:52.075 CC app/spdk_nvme_discover/discovery_aer.o 00:03:52.075 TEST_HEADER include/spdk/accel_module.h 00:03:52.075 CC app/spdk_top/spdk_top.o 00:03:52.075 TEST_HEADER include/spdk/accel.h 00:03:52.075 CC app/spdk_nvme_identify/identify.o 00:03:52.075 TEST_HEADER include/spdk/assert.h 00:03:52.075 TEST_HEADER include/spdk/barrier.h 00:03:52.075 TEST_HEADER include/spdk/bdev.h 00:03:52.075 TEST_HEADER include/spdk/base64.h 00:03:52.075 TEST_HEADER include/spdk/bdev_module.h 00:03:52.075 TEST_HEADER include/spdk/bdev_zone.h 00:03:52.075 CC app/spdk_lspci/spdk_lspci.o 00:03:52.075 CC app/spdk_nvme_perf/perf.o 00:03:52.075 TEST_HEADER include/spdk/bit_array.h 00:03:52.075 TEST_HEADER include/spdk/bit_pool.h 00:03:52.075 CC test/rpc_client/rpc_client_test.o 00:03:52.075 TEST_HEADER include/spdk/blob_bdev.h 00:03:52.075 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:52.075 TEST_HEADER include/spdk/blobfs.h 00:03:52.075 TEST_HEADER include/spdk/blob.h 00:03:52.075 TEST_HEADER include/spdk/conf.h 00:03:52.075 TEST_HEADER include/spdk/config.h 00:03:52.075 TEST_HEADER include/spdk/cpuset.h 00:03:52.075 TEST_HEADER include/spdk/crc16.h 00:03:52.075 TEST_HEADER include/spdk/crc32.h 00:03:52.075 TEST_HEADER include/spdk/crc64.h 00:03:52.075 TEST_HEADER include/spdk/dif.h 00:03:52.075 TEST_HEADER include/spdk/dma.h 00:03:52.075 TEST_HEADER include/spdk/endian.h 00:03:52.075 TEST_HEADER include/spdk/env_dpdk.h 00:03:52.075 TEST_HEADER include/spdk/env.h 00:03:52.075 TEST_HEADER include/spdk/event.h 00:03:52.075 TEST_HEADER include/spdk/fd_group.h 00:03:52.075 TEST_HEADER include/spdk/fd.h 00:03:52.075 TEST_HEADER include/spdk/file.h 00:03:52.075 TEST_HEADER include/spdk/ftl.h 00:03:52.075 TEST_HEADER include/spdk/gpt_spec.h 00:03:52.075 TEST_HEADER include/spdk/hexlify.h 00:03:52.075 TEST_HEADER include/spdk/histogram_data.h 00:03:52.075 TEST_HEADER include/spdk/idxd.h 00:03:52.075 TEST_HEADER include/spdk/idxd_spec.h 00:03:52.075 TEST_HEADER include/spdk/init.h 00:03:52.075 TEST_HEADER include/spdk/ioat.h 00:03:52.075 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:52.075 TEST_HEADER include/spdk/iscsi_spec.h 00:03:52.341 TEST_HEADER include/spdk/json.h 00:03:52.341 TEST_HEADER include/spdk/keyring.h 00:03:52.341 TEST_HEADER include/spdk/jsonrpc.h 00:03:52.341 TEST_HEADER include/spdk/keyring_module.h 00:03:52.341 TEST_HEADER include/spdk/likely.h 00:03:52.341 TEST_HEADER include/spdk/log.h 00:03:52.341 TEST_HEADER include/spdk/lvol.h 00:03:52.341 TEST_HEADER include/spdk/memory.h 00:03:52.341 TEST_HEADER include/spdk/mmio.h 00:03:52.341 TEST_HEADER include/spdk/nbd.h 00:03:52.341 TEST_HEADER include/spdk/notify.h 00:03:52.341 TEST_HEADER include/spdk/nvme.h 00:03:52.341 TEST_HEADER include/spdk/nvme_intel.h 00:03:52.341 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:52.341 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:52.341 TEST_HEADER include/spdk/nvme_spec.h 00:03:52.341 TEST_HEADER include/spdk/nvme_zns.h 00:03:52.341 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:52.341 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:52.341 TEST_HEADER include/spdk/nvmf.h 00:03:52.341 TEST_HEADER include/spdk/nvmf_spec.h 00:03:52.341 TEST_HEADER include/spdk/nvmf_transport.h 00:03:52.341 TEST_HEADER include/spdk/opal.h 00:03:52.341 TEST_HEADER include/spdk/opal_spec.h 00:03:52.341 TEST_HEADER include/spdk/pci_ids.h 00:03:52.341 TEST_HEADER include/spdk/pipe.h 00:03:52.341 TEST_HEADER include/spdk/queue.h 00:03:52.341 TEST_HEADER include/spdk/reduce.h 00:03:52.341 TEST_HEADER include/spdk/rpc.h 00:03:52.341 TEST_HEADER include/spdk/scsi.h 00:03:52.341 TEST_HEADER include/spdk/scheduler.h 00:03:52.341 TEST_HEADER include/spdk/scsi_spec.h 00:03:52.341 TEST_HEADER include/spdk/sock.h 00:03:52.341 TEST_HEADER include/spdk/stdinc.h 00:03:52.341 TEST_HEADER include/spdk/string.h 00:03:52.341 TEST_HEADER include/spdk/thread.h 00:03:52.341 TEST_HEADER include/spdk/trace.h 00:03:52.341 TEST_HEADER include/spdk/trace_parser.h 00:03:52.341 TEST_HEADER include/spdk/tree.h 00:03:52.341 TEST_HEADER include/spdk/ublk.h 00:03:52.341 TEST_HEADER include/spdk/util.h 00:03:52.341 TEST_HEADER include/spdk/uuid.h 00:03:52.341 TEST_HEADER include/spdk/version.h 00:03:52.341 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:52.341 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:52.341 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:52.341 TEST_HEADER include/spdk/vhost.h 00:03:52.341 TEST_HEADER include/spdk/vmd.h 00:03:52.341 TEST_HEADER include/spdk/xor.h 00:03:52.341 TEST_HEADER include/spdk/zipf.h 00:03:52.341 CXX test/cpp_headers/accel.o 00:03:52.342 CXX test/cpp_headers/accel_module.o 00:03:52.342 CXX test/cpp_headers/assert.o 00:03:52.342 CXX test/cpp_headers/barrier.o 00:03:52.342 CXX test/cpp_headers/base64.o 00:03:52.342 CXX test/cpp_headers/bdev.o 00:03:52.342 CXX test/cpp_headers/bdev_module.o 00:03:52.342 CXX test/cpp_headers/bdev_zone.o 00:03:52.342 CXX test/cpp_headers/bit_array.o 00:03:52.342 CXX test/cpp_headers/bit_pool.o 00:03:52.342 CXX test/cpp_headers/blob_bdev.o 00:03:52.342 CXX test/cpp_headers/blobfs_bdev.o 00:03:52.342 CXX test/cpp_headers/blobfs.o 00:03:52.342 CXX test/cpp_headers/blob.o 00:03:52.342 CXX test/cpp_headers/conf.o 00:03:52.342 CXX test/cpp_headers/config.o 00:03:52.342 CXX test/cpp_headers/cpuset.o 00:03:52.342 CXX test/cpp_headers/crc16.o 00:03:52.342 CC app/iscsi_tgt/iscsi_tgt.o 00:03:52.342 CC app/spdk_dd/spdk_dd.o 00:03:52.342 CC app/nvmf_tgt/nvmf_main.o 00:03:52.342 CXX test/cpp_headers/crc32.o 00:03:52.342 CC test/app/histogram_perf/histogram_perf.o 00:03:52.342 CC examples/ioat/perf/perf.o 00:03:52.342 CC 
examples/ioat/verify/verify.o 00:03:52.342 CC test/env/vtophys/vtophys.o 00:03:52.342 CC test/app/jsoncat/jsoncat.o 00:03:52.342 CC test/env/memory/memory_ut.o 00:03:52.342 CC test/app/stub/stub.o 00:03:52.342 CC examples/util/zipf/zipf.o 00:03:52.342 CC app/spdk_tgt/spdk_tgt.o 00:03:52.342 CC app/fio/nvme/fio_plugin.o 00:03:52.342 CC test/thread/poller_perf/poller_perf.o 00:03:52.342 CC test/env/pci/pci_ut.o 00:03:52.342 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:52.342 CC test/dma/test_dma/test_dma.o 00:03:52.342 CC app/fio/bdev/fio_plugin.o 00:03:52.342 CC test/app/bdev_svc/bdev_svc.o 00:03:52.601 LINK spdk_lspci 00:03:52.601 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:52.601 CC test/env/mem_callbacks/mem_callbacks.o 00:03:52.601 LINK rpc_client_test 00:03:52.601 LINK spdk_nvme_discover 00:03:52.601 LINK vtophys 00:03:52.601 LINK poller_perf 00:03:52.601 LINK histogram_perf 00:03:52.601 LINK zipf 00:03:52.601 CXX test/cpp_headers/crc64.o 00:03:52.601 CXX test/cpp_headers/dif.o 00:03:52.601 LINK jsoncat 00:03:52.601 LINK interrupt_tgt 00:03:52.601 CXX test/cpp_headers/dma.o 00:03:52.601 CXX test/cpp_headers/endian.o 00:03:52.601 CXX test/cpp_headers/env_dpdk.o 00:03:52.601 LINK nvmf_tgt 00:03:52.601 CXX test/cpp_headers/env.o 00:03:52.601 CXX test/cpp_headers/event.o 00:03:52.601 CXX test/cpp_headers/fd_group.o 00:03:52.601 LINK env_dpdk_post_init 00:03:52.862 CXX test/cpp_headers/fd.o 00:03:52.862 CXX test/cpp_headers/file.o 00:03:52.862 LINK stub 00:03:52.862 CXX test/cpp_headers/ftl.o 00:03:52.862 CXX test/cpp_headers/gpt_spec.o 00:03:52.862 CXX test/cpp_headers/hexlify.o 00:03:52.862 LINK iscsi_tgt 00:03:52.862 LINK spdk_trace_record 00:03:52.862 LINK spdk_tgt 00:03:52.862 LINK ioat_perf 00:03:52.862 CXX test/cpp_headers/histogram_data.o 00:03:52.862 CXX test/cpp_headers/idxd.o 00:03:52.862 LINK verify 00:03:52.862 LINK bdev_svc 00:03:52.862 CXX test/cpp_headers/idxd_spec.o 00:03:52.862 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:52.862 CXX test/cpp_headers/init.o 00:03:52.862 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:52.862 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:52.862 CXX test/cpp_headers/ioat.o 00:03:52.862 CXX test/cpp_headers/ioat_spec.o 00:03:52.862 CXX test/cpp_headers/iscsi_spec.o 00:03:52.862 CXX test/cpp_headers/json.o 00:03:53.124 CXX test/cpp_headers/jsonrpc.o 00:03:53.124 CXX test/cpp_headers/keyring.o 00:03:53.124 LINK spdk_trace 00:03:53.124 CXX test/cpp_headers/keyring_module.o 00:03:53.124 CXX test/cpp_headers/likely.o 00:03:53.124 LINK spdk_dd 00:03:53.124 CXX test/cpp_headers/log.o 00:03:53.124 CXX test/cpp_headers/lvol.o 00:03:53.124 CXX test/cpp_headers/memory.o 00:03:53.124 CXX test/cpp_headers/mmio.o 00:03:53.124 CXX test/cpp_headers/nbd.o 00:03:53.124 CXX test/cpp_headers/notify.o 00:03:53.125 CXX test/cpp_headers/nvme.o 00:03:53.125 LINK pci_ut 00:03:53.125 CXX test/cpp_headers/nvme_intel.o 00:03:53.125 CXX test/cpp_headers/nvme_ocssd.o 00:03:53.125 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:53.125 LINK test_dma 00:03:53.125 CXX test/cpp_headers/nvme_spec.o 00:03:53.125 CXX test/cpp_headers/nvme_zns.o 00:03:53.125 CXX test/cpp_headers/nvmf_cmd.o 00:03:53.125 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:53.125 CXX test/cpp_headers/nvmf.o 00:03:53.125 CXX test/cpp_headers/nvmf_spec.o 00:03:53.125 CXX test/cpp_headers/nvmf_transport.o 00:03:53.125 CXX test/cpp_headers/opal.o 00:03:53.389 CC test/event/event_perf/event_perf.o 00:03:53.389 CC test/event/reactor/reactor.o 00:03:53.389 CC test/event/reactor_perf/reactor_perf.o 
00:03:53.389 LINK nvme_fuzz 00:03:53.389 CXX test/cpp_headers/opal_spec.o 00:03:53.389 CXX test/cpp_headers/pci_ids.o 00:03:53.389 CXX test/cpp_headers/pipe.o 00:03:53.389 CC examples/sock/hello_world/hello_sock.o 00:03:53.389 CC test/event/app_repeat/app_repeat.o 00:03:53.389 CC examples/thread/thread/thread_ex.o 00:03:53.389 LINK spdk_bdev 00:03:53.389 CC examples/vmd/lsvmd/lsvmd.o 00:03:53.389 CC examples/idxd/perf/perf.o 00:03:53.389 CXX test/cpp_headers/queue.o 00:03:53.389 CC test/event/scheduler/scheduler.o 00:03:53.389 CXX test/cpp_headers/reduce.o 00:03:53.389 CXX test/cpp_headers/rpc.o 00:03:53.389 CC examples/vmd/led/led.o 00:03:53.389 CXX test/cpp_headers/scheduler.o 00:03:53.389 LINK spdk_nvme 00:03:53.389 CXX test/cpp_headers/scsi.o 00:03:53.389 CXX test/cpp_headers/scsi_spec.o 00:03:53.389 CXX test/cpp_headers/sock.o 00:03:53.389 CXX test/cpp_headers/stdinc.o 00:03:53.389 CXX test/cpp_headers/string.o 00:03:53.389 CXX test/cpp_headers/thread.o 00:03:53.649 CXX test/cpp_headers/trace.o 00:03:53.649 CXX test/cpp_headers/trace_parser.o 00:03:53.649 CXX test/cpp_headers/tree.o 00:03:53.649 CXX test/cpp_headers/ublk.o 00:03:53.649 LINK event_perf 00:03:53.649 LINK reactor 00:03:53.649 CXX test/cpp_headers/util.o 00:03:53.649 CXX test/cpp_headers/uuid.o 00:03:53.649 CXX test/cpp_headers/version.o 00:03:53.649 CXX test/cpp_headers/vfio_user_pci.o 00:03:53.649 LINK reactor_perf 00:03:53.649 CXX test/cpp_headers/vfio_user_spec.o 00:03:53.649 CXX test/cpp_headers/vhost.o 00:03:53.649 CXX test/cpp_headers/vmd.o 00:03:53.649 CXX test/cpp_headers/xor.o 00:03:53.649 CC app/vhost/vhost.o 00:03:53.649 CXX test/cpp_headers/zipf.o 00:03:53.649 LINK vhost_fuzz 00:03:53.649 LINK spdk_nvme_perf 00:03:53.649 LINK lsvmd 00:03:53.649 LINK app_repeat 00:03:53.649 LINK mem_callbacks 00:03:53.649 LINK spdk_nvme_identify 00:03:53.912 LINK led 00:03:53.912 LINK spdk_top 00:03:53.912 LINK hello_sock 00:03:53.912 CC test/nvme/aer/aer.o 00:03:53.912 CC test/nvme/err_injection/err_injection.o 00:03:53.912 CC test/nvme/reset/reset.o 00:03:53.912 CC test/nvme/e2edp/nvme_dp.o 00:03:53.912 LINK scheduler 00:03:53.912 CC test/nvme/sgl/sgl.o 00:03:53.912 CC test/nvme/overhead/overhead.o 00:03:53.912 CC test/nvme/startup/startup.o 00:03:53.912 LINK thread 00:03:53.912 CC test/accel/dif/dif.o 00:03:53.912 CC test/blobfs/mkfs/mkfs.o 00:03:53.912 CC test/nvme/reserve/reserve.o 00:03:53.912 CC test/nvme/simple_copy/simple_copy.o 00:03:53.912 CC test/nvme/connect_stress/connect_stress.o 00:03:53.912 CC test/nvme/boot_partition/boot_partition.o 00:03:53.912 CC test/nvme/compliance/nvme_compliance.o 00:03:53.912 CC test/nvme/fused_ordering/fused_ordering.o 00:03:53.912 CC test/lvol/esnap/esnap.o 00:03:53.912 CC test/nvme/cuse/cuse.o 00:03:53.912 CC test/nvme/fdp/fdp.o 00:03:53.912 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:54.170 LINK idxd_perf 00:03:54.171 LINK vhost 00:03:54.171 LINK err_injection 00:03:54.171 LINK startup 00:03:54.171 LINK boot_partition 00:03:54.171 LINK mkfs 00:03:54.171 LINK connect_stress 00:03:54.171 LINK doorbell_aers 00:03:54.429 LINK overhead 00:03:54.429 LINK aer 00:03:54.429 LINK reserve 00:03:54.429 LINK fused_ordering 00:03:54.429 CC examples/nvme/hello_world/hello_world.o 00:03:54.429 LINK nvme_dp 00:03:54.429 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:54.429 CC examples/nvme/arbitration/arbitration.o 00:03:54.429 CC examples/nvme/abort/abort.o 00:03:54.429 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:54.429 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:54.429 CC 
examples/nvme/hotplug/hotplug.o 00:03:54.429 CC examples/nvme/reconnect/reconnect.o 00:03:54.429 LINK memory_ut 00:03:54.429 LINK reset 00:03:54.429 LINK sgl 00:03:54.429 CC examples/accel/perf/accel_perf.o 00:03:54.429 LINK nvme_compliance 00:03:54.429 LINK simple_copy 00:03:54.429 CC examples/blob/cli/blobcli.o 00:03:54.429 CC examples/blob/hello_world/hello_blob.o 00:03:54.429 LINK fdp 00:03:54.686 LINK dif 00:03:54.686 LINK pmr_persistence 00:03:54.686 LINK cmb_copy 00:03:54.686 LINK hotplug 00:03:54.686 LINK hello_world 00:03:54.686 LINK hello_blob 00:03:54.686 LINK abort 00:03:54.686 LINK arbitration 00:03:54.944 LINK reconnect 00:03:54.944 LINK nvme_manage 00:03:54.944 LINK accel_perf 00:03:54.944 LINK blobcli 00:03:54.944 CC test/bdev/bdevio/bdevio.o 00:03:55.202 CC examples/bdev/hello_world/hello_bdev.o 00:03:55.202 CC examples/bdev/bdevperf/bdevperf.o 00:03:55.460 LINK iscsi_fuzz 00:03:55.460 LINK bdevio 00:03:55.460 LINK hello_bdev 00:03:55.718 LINK cuse 00:03:55.976 LINK bdevperf 00:03:56.542 CC examples/nvmf/nvmf/nvmf.o 00:03:56.800 LINK nvmf 00:03:59.327 LINK esnap 00:03:59.586 00:03:59.586 real 0m41.505s 00:03:59.586 user 7m24.398s 00:03:59.586 sys 1m49.672s 00:03:59.586 01:51:05 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:59.586 01:51:05 make -- common/autotest_common.sh@10 -- $ set +x 00:03:59.586 ************************************ 00:03:59.586 END TEST make 00:03:59.586 ************************************ 00:03:59.586 01:51:05 -- common/autotest_common.sh@1142 -- $ return 0 00:03:59.586 01:51:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:59.586 01:51:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:59.586 01:51:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:59.586 01:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 01:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:59.586 01:51:05 -- pm/common@44 -- $ pid=1347259 00:03:59.586 01:51:05 -- pm/common@50 -- $ kill -TERM 1347259 00:03:59.586 01:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 01:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:59.586 01:51:05 -- pm/common@44 -- $ pid=1347261 00:03:59.586 01:51:05 -- pm/common@50 -- $ kill -TERM 1347261 00:03:59.586 01:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 01:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:59.586 01:51:05 -- pm/common@44 -- $ pid=1347263 00:03:59.586 01:51:05 -- pm/common@50 -- $ kill -TERM 1347263 00:03:59.586 01:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 01:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:59.586 01:51:05 -- pm/common@44 -- $ pid=1347291 00:03:59.586 01:51:05 -- pm/common@50 -- $ sudo -E kill -TERM 1347291 00:03:59.586 01:51:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:59.586 01:51:05 -- nvmf/common.sh@7 -- # uname -s 00:03:59.586 01:51:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.586 01:51:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.586 01:51:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.586 01:51:05 -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.586 01:51:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.586 01:51:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.586 01:51:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.586 01:51:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.586 01:51:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.586 01:51:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.586 01:51:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:59.586 01:51:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:59.586 01:51:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.586 01:51:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.586 01:51:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:59.586 01:51:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:59.586 01:51:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:59.586 01:51:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.586 01:51:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.586 01:51:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.586 01:51:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.586 01:51:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.586 01:51:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.586 01:51:05 -- paths/export.sh@5 -- # export PATH 00:03:59.586 01:51:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.586 01:51:05 -- nvmf/common.sh@47 -- # : 0 00:03:59.586 01:51:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:59.586 01:51:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:59.586 01:51:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:59.586 01:51:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.586 01:51:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.586 01:51:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:59.586 01:51:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:59.586 01:51:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:59.586 01:51:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.586 01:51:05 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.586 01:51:05 -- spdk/autotest.sh@32 -- # '[' 
Linux = Linux ']' 00:03:59.586 01:51:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:59.586 01:51:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:59.586 01:51:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.586 01:51:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:59.586 01:51:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.586 01:51:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.586 01:51:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:59.586 01:51:05 -- spdk/autotest.sh@48 -- # udevadm_pid=1423635 00:03:59.586 01:51:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:59.586 01:51:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:59.586 01:51:05 -- pm/common@17 -- # local monitor 00:03:59.586 01:51:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 01:51:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 01:51:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 01:51:05 -- pm/common@21 -- # date +%s 00:03:59.586 01:51:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 01:51:05 -- pm/common@21 -- # date +%s 00:03:59.586 01:51:05 -- pm/common@25 -- # sleep 1 00:03:59.586 01:51:05 -- pm/common@21 -- # date +%s 00:03:59.586 01:51:05 -- pm/common@21 -- # date +%s 00:03:59.586 01:51:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720914665 00:03:59.586 01:51:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720914665 00:03:59.586 01:51:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720914665 00:03:59.586 01:51:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720914665 00:03:59.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720914665_collect-vmstat.pm.log 00:03:59.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720914665_collect-cpu-load.pm.log 00:03:59.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720914665_collect-cpu-temp.pm.log 00:03:59.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720914665_collect-bmc-pm.bmc.pm.log 00:04:00.519 01:51:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:00.519 01:51:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:00.519 01:51:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.519 01:51:06 -- common/autotest_common.sh@10 -- # set +x 00:04:00.519 01:51:06 -- spdk/autotest.sh@59 -- # create_test_list 
00:04:00.519 01:51:06 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:00.519 01:51:06 -- common/autotest_common.sh@10 -- # set +x 00:04:00.775 01:51:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:00.775 01:51:06 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.775 01:51:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.775 01:51:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:00.775 01:51:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.775 01:51:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:00.775 01:51:06 -- common/autotest_common.sh@1455 -- # uname 00:04:00.775 01:51:06 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:00.775 01:51:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:00.775 01:51:06 -- common/autotest_common.sh@1475 -- # uname 00:04:00.775 01:51:06 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:00.775 01:51:06 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:00.775 01:51:06 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:00.775 01:51:06 -- spdk/autotest.sh@72 -- # hash lcov 00:04:00.775 01:51:06 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:00.775 01:51:06 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:00.775 --rc lcov_branch_coverage=1 00:04:00.775 --rc lcov_function_coverage=1 00:04:00.775 --rc genhtml_branch_coverage=1 00:04:00.775 --rc genhtml_function_coverage=1 00:04:00.775 --rc genhtml_legend=1 00:04:00.775 --rc geninfo_all_blocks=1 00:04:00.775 ' 00:04:00.775 01:51:06 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:00.775 --rc lcov_branch_coverage=1 00:04:00.775 --rc lcov_function_coverage=1 00:04:00.775 --rc genhtml_branch_coverage=1 00:04:00.775 --rc genhtml_function_coverage=1 00:04:00.775 --rc genhtml_legend=1 00:04:00.775 --rc geninfo_all_blocks=1 00:04:00.775 ' 00:04:00.775 01:51:06 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:00.775 --rc lcov_branch_coverage=1 00:04:00.775 --rc lcov_function_coverage=1 00:04:00.775 --rc genhtml_branch_coverage=1 00:04:00.775 --rc genhtml_function_coverage=1 00:04:00.775 --rc genhtml_legend=1 00:04:00.775 --rc geninfo_all_blocks=1 00:04:00.775 --no-external' 00:04:00.775 01:51:06 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:00.775 --rc lcov_branch_coverage=1 00:04:00.775 --rc lcov_function_coverage=1 00:04:00.775 --rc genhtml_branch_coverage=1 00:04:00.775 --rc genhtml_function_coverage=1 00:04:00.775 --rc genhtml_legend=1 00:04:00.775 --rc geninfo_all_blocks=1 00:04:00.775 --no-external' 00:04:00.775 01:51:06 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:00.775 lcov: LCOV version 1.14 00:04:00.775 01:51:06 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 
00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:06.081 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:06.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:06.082 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:06.082 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:06.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:06.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:06.340 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:06.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:06.341 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:06.341 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:06.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:28.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:28.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:34.860 01:51:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:34.860 01:51:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.860 01:51:39 -- common/autotest_common.sh@10 -- # set +x 00:04:34.860 01:51:39 -- spdk/autotest.sh@91 -- # rm -f 00:04:34.860 01:51:39 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.119 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:35.119 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:35.119 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:35.119 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:35.119 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:35.119 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:35.119 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:35.119 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:35.119 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:35.119 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:35.376 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:35.376 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:35.376 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:35.376 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:35.376 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:35.376 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:35.376 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:35.376 01:51:40 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:35.376 01:51:40 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:35.376 01:51:40 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:35.376 01:51:40 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:35.376 01:51:41 -- 
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.376 01:51:41 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:35.376 01:51:41 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:35.376 01:51:41 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.376 01:51:41 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.376 01:51:41 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:35.376 01:51:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.376 01:51:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:35.376 01:51:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:35.376 01:51:41 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:35.376 01:51:41 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:35.376 No valid GPT data, bailing 00:04:35.376 01:51:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.376 01:51:41 -- scripts/common.sh@391 -- # pt= 00:04:35.376 01:51:41 -- scripts/common.sh@392 -- # return 1 00:04:35.376 01:51:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:35.376 1+0 records in 00:04:35.376 1+0 records out 00:04:35.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00185938 s, 564 MB/s 00:04:35.376 01:51:41 -- spdk/autotest.sh@118 -- # sync 00:04:35.633 01:51:41 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:35.633 01:51:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:35.633 01:51:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.533 01:51:42 -- spdk/autotest.sh@124 -- # uname -s 00:04:37.533 01:51:43 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:37.533 01:51:43 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:37.533 01:51:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.533 01:51:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.533 01:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:37.533 ************************************ 00:04:37.533 START TEST setup.sh 00:04:37.533 ************************************ 00:04:37.533 01:51:43 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:37.533 * Looking for test storage... 00:04:37.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:37.533 01:51:43 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:37.533 01:51:43 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:37.533 01:51:43 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:37.533 01:51:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.534 01:51:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.534 01:51:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.534 ************************************ 00:04:37.534 START TEST acl 00:04:37.534 ************************************ 00:04:37.534 01:51:43 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:37.534 * Looking for test storage... 
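The pre-cleanup pass traced just above checks the NVMe namespace for zoned support, probes it for an existing partition signature, and wipes its first megabyte before the setup tests begin. A minimal sketch of what those helpers boil down to, using only the sysfs path, blkid call, and dd invocation that appear in the trace (nvme0n1 is simply the namespace this run found):

# "none" means a conventional (non-zoned) namespace, so it stays in the test set
cat /sys/block/nvme0n1/queue/zoned
# the same partition-table probe shown in the trace; empty output means no PT signature was found
blkid -s PTTYPE -o value /dev/nvme0n1
# wipe the first MiB, producing the "1+0 records in / 1+0 records out" lines above
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1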
00:04:37.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:37.534 01:51:43 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:37.534 01:51:43 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:37.534 01:51:43 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:37.534 01:51:43 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:37.534 01:51:43 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.534 01:51:43 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:37.534 01:51:43 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:37.534 01:51:43 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.534 01:51:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.534 01:51:43 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:37.534 01:51:43 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:37.534 01:51:43 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:37.534 01:51:43 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:37.534 01:51:43 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:37.534 01:51:43 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.534 01:51:43 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.437 01:51:44 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:39.437 01:51:44 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:39.437 01:51:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.437 01:51:44 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:39.437 01:51:44 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.437 01:51:44 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:40.373 Hugepages 00:04:40.373 node hugesize free / total 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 00:04:40.373 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.373 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.374 01:51:45 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:40.374 01:51:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:40.374 01:51:45 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.374 01:51:45 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.374 01:51:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:40.374 ************************************ 00:04:40.374 START TEST denied 00:04:40.374 ************************************ 00:04:40.374 01:51:45 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:40.374 01:51:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:40.374 01:51:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:40.374 01:51:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:40.374 01:51:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.374 01:51:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.786 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:41.786 01:51:47 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:41.786 01:51:47 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:41.786 01:51:47 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:41.786 01:51:47 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:41.786 01:51:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:41.786 01:51:47 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:41.786 01:51:47 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:41.786 01:51:47 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:41.786 01:51:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.787 01:51:47 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.325 00:04:44.325 real 0m3.916s 00:04:44.325 user 0m1.159s 00:04:44.325 sys 0m1.855s 00:04:44.325 01:51:49 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.325 01:51:49 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:44.325 ************************************ 00:04:44.325 END TEST denied 00:04:44.325 ************************************ 00:04:44.325 01:51:49 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:44.325 01:51:49 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:44.325 01:51:49 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.325 01:51:49 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.325 01:51:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:44.325 ************************************ 00:04:44.325 START TEST allowed 00:04:44.325 ************************************ 00:04:44.325 01:51:49 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:44.325 01:51:49 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:44.325 01:51:49 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:44.325 01:51:49 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:44.325 01:51:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.325 01:51:49 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.858 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:46.858 01:51:52 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:46.858 01:51:52 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:46.858 01:51:52 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:46.858 01:51:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.858 01:51:52 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.236 00:04:48.236 real 0m3.863s 00:04:48.236 user 0m0.973s 00:04:48.236 sys 0m1.675s 00:04:48.236 01:51:53 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.236 01:51:53 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:48.236 ************************************ 00:04:48.236 END TEST allowed 00:04:48.236 ************************************ 00:04:48.236 01:51:53 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:48.236 00:04:48.236 real 0m10.681s 00:04:48.236 user 0m3.337s 00:04:48.236 sys 0m5.300s 00:04:48.236 01:51:53 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.236 01:51:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:48.236 ************************************ 00:04:48.236 END TEST acl 00:04:48.236 ************************************ 00:04:48.236 01:51:53 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:48.236 01:51:53 setup.sh -- 
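The acl test that just finished exercises the allow/deny filters of SPDK's setup.sh: with the controller on the block list the script reports "Skipping denied controller at 0000:88:00.0", and with the same address on the allow list it rebinds it (nvme -> vfio-pci). A rough sketch of driving those two modes by hand from the spdk checkout, reusing only the environment variables and script path visible in the trace:

# deny the controller: setup.sh must leave 0000:88:00.0 on its current driver
PCI_BLOCKED="0000:88:00.0" ./scripts/setup.sh config
# allow only that controller: setup.sh rebinds it for userspace use
PCI_ALLOWED="0000:88:00.0" ./scripts/setup.sh config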
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:48.236 01:51:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.236 01:51:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.236 01:51:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.236 ************************************ 00:04:48.236 START TEST hugepages 00:04:48.236 ************************************ 00:04:48.236 01:51:53 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:48.236 * Looking for test storage... 00:04:48.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:48.236 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:48.236 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:48.236 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:48.236 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:48.236 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41123056 kB' 'MemAvailable: 44633244 kB' 'Buffers: 2704 kB' 'Cached: 12824108 kB' 'SwapCached: 0 kB' 'Active: 9828764 kB' 'Inactive: 3506552 kB' 'Active(anon): 9434412 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511860 kB' 'Mapped: 187860 kB' 'Shmem: 8925908 kB' 'KReclaimable: 205816 kB' 'Slab: 583164 kB' 'SReclaimable: 205816 kB' 'SUnreclaim: 377348 kB' 'KernelStack: 12768 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 10558664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.237 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.238 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.239 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.498 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:48.498 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.498 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.498 
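The clear_hp loop running through this part of the trace writes 0 into every per-node hugepage pool, and default_setup then asks for 1024 pages of the default 2048 kB size on node0 only (nr_hugepages=1024, node_ids=('0')). The redirection targets are not echoed by xtrace, but given the per-node sysfs paths the script iterates over, the sequence amounts to roughly:

# drop any leftover reservations on both NUMA nodes
for node in /sys/devices/system/node/node[01]; do
  for hp in "$node"/hugepages/hugepages-*; do
    echo 0 > "$hp"/nr_hugepages
  done
done
# default_setup's request: 1024 x 2048 kB pages on node0
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages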
01:51:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.498 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.498 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:48.498 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:48.498 01:51:53 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:48.498 01:51:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.498 01:51:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.498 01:51:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.498 ************************************ 00:04:48.498 START TEST default_setup 00:04:48.498 ************************************ 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.498 01:51:53 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.430 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:49.689 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:49.689 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:49.689 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:49.689 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:49.689 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:49.689 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:49.689 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:49.689 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:49.689 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:49.689 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:49.689 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:49.689 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:49.689 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:49.689 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:49.689 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:50.626 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43223520 kB' 'MemAvailable: 46733688 kB' 'Buffers: 2704 kB' 'Cached: 12824212 kB' 'SwapCached: 0 kB' 'Active: 9848240 kB' 'Inactive: 3506552 kB' 'Active(anon): 9453888 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531624 kB' 'Mapped: 188064 kB' 'Shmem: 8926012 kB' 'KReclaimable: 205776 kB' 'Slab: 582568 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376792 kB' 
'KernelStack: 12736 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10579464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 
01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.626 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.627 01:51:56 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.627 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43223900 kB' 'MemAvailable: 46734068 kB' 'Buffers: 2704 kB' 'Cached: 12824212 kB' 'SwapCached: 0 kB' 'Active: 9847804 kB' 'Inactive: 3506552 kB' 'Active(anon): 9453452 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530772 kB' 'Mapped: 188024 kB' 'Shmem: 8926012 kB' 'KReclaimable: 205776 kB' 'Slab: 582684 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376908 kB' 'KernelStack: 12768 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10579480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.628 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.629 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43224176 kB' 'MemAvailable: 46734344 kB' 'Buffers: 2704 kB' 'Cached: 12824228 kB' 'SwapCached: 0 kB' 'Active: 9846980 kB' 'Inactive: 3506552 kB' 'Active(anon): 9452628 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529904 kB' 'Mapped: 187948 kB' 'Shmem: 8926028 kB' 'KReclaimable: 205776 kB' 'Slab: 582648 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376872 kB' 'KernelStack: 12752 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10579504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.630 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.890 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.890 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:50.890 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.890 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.890 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.890 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 
01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.891 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:50.892 nr_hugepages=1024 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.892 resv_hugepages=0 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.892 surplus_hugepages=0 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.892 anon_hugepages=0 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43223876 
kB' 'MemAvailable: 46734044 kB' 'Buffers: 2704 kB' 'Cached: 12824252 kB' 'SwapCached: 0 kB' 'Active: 9847752 kB' 'Inactive: 3506552 kB' 'Active(anon): 9453400 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530632 kB' 'Mapped: 187948 kB' 'Shmem: 8926052 kB' 'KReclaimable: 205776 kB' 'Slab: 582648 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376872 kB' 'KernelStack: 12800 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10579524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.892 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
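The long run of [[ ... ]] / continue entries above is the trace of a single key lookup: common.sh reads /proc/meminfo line by line with IFS=': ', skips every key that is not the one requested, and echoes the matching value. A minimal standalone sketch of that pattern follows; the helper name is illustrative, not the actual setup/common.sh function, and it assumes a stock Linux /proc/meminfo.

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern traced above: scan /proc/meminfo with
# IFS=': ' and print the value of the requested key. Helper name is illustrative.
lookup_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key shows up as a "continue" in the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

lookup_meminfo HugePages_Total   # prints 1024 on the system traced here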
00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.893 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
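The value this scan produces feeds the accounting check that follows (hugepages.sh@110): the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages. A hedged sketch of that check outside the test harness; the awk extraction and variable names are illustrative, and the expected count of 1024 is simply the value used in this run.

#!/usr/bin/env bash
# Sketch of the consistency check: total hugepages == requested + surplus + reserved.
expected=1024   # nr_hugepages requested in this run
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

if (( total == expected + surp + resv )); then
    echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp"
else
    echo "hugepage accounting mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
    exit 1
fi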
00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25865528 kB' 'MemUsed: 6964356 kB' 'SwapCached: 0 kB' 'Active: 3789548 kB' 'Inactive: 109764 kB' 'Active(anon): 3678660 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3698452 kB' 'Mapped: 36292 kB' 'AnonPages: 204096 kB' 'Shmem: 3477800 kB' 'KernelStack: 6936 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94844 kB' 'Slab: 309496 kB' 'SReclaimable: 94844 kB' 'SUnreclaim: 214652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:50.894 01:51:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.894 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
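From hugepages.sh@117 onward the same lookup is repeated per NUMA node, this time against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the script strips with the extglob expansion ${mem[@]#Node +([0-9]) }. A simplified per-node sketch; the helper name and the sed-based prefix strip are illustrative substitutes for that expansion.

#!/usr/bin/env bash
# Sketch of the per-node lookup traced here: strip the "Node N " prefix, then
# reuse the same IFS=': ' key/value scan used for the global /proc/meminfo.
lookup_node_meminfo() {
    local node=$1 get=$2 var val _
    sed "s/^Node $node //" "/sys/devices/system/node/node$node/meminfo" |
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        break
    done
}

lookup_node_meminfo 0 HugePages_Surp   # 0 in this run
lookup_node_meminfo 0 HugePages_Free   # 1024 in this run (all pages sit on node 0, hence "node0=1024 expecting 1024" below)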
00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.895 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:50.896 node0=1024 expecting 1024 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:50.896 00:04:50.896 real 0m2.426s 00:04:50.896 user 0m0.663s 00:04:50.896 sys 0m0.894s 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.896 01:51:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:50.896 ************************************ 00:04:50.896 END TEST default_setup 00:04:50.896 ************************************ 00:04:50.896 01:51:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:50.896 01:51:56 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:50.896 01:51:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.896 01:51:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.896 01:51:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:50.896 ************************************ 00:04:50.896 START TEST per_node_1G_alloc 00:04:50.896 ************************************ 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:50.896 01:51:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.896 01:51:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.275 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.275 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:52.275 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.275 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.275 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.275 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.275 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.275 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.275 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.275 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.275 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.275 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.275 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.275 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.275 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.275 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.275 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.275 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.276 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43204552 kB' 'MemAvailable: 46714720 kB' 'Buffers: 2704 kB' 'Cached: 12824328 kB' 'SwapCached: 0 kB' 'Active: 9847804 kB' 'Inactive: 3506552 kB' 'Active(anon): 9453452 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530636 kB' 'Mapped: 188096 kB' 'Shmem: 8926128 kB' 'KReclaimable: 205776 kB' 'Slab: 582800 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 377024 kB' 'KernelStack: 12848 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10579580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:52.276
01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[... the set -x trace repeats the same setup/common.sh@31/@32 IFS=': ' / read / compare / continue cycle once per remaining /proc/meminfo field until the requested key is reached ...]
00:04:52.277 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.277 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.277 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.277 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
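The block above is the set -x trace of one get_meminfo AnonHugePages lookup: the helper snapshots the meminfo file and walks it field by field until the requested key matches, then prints its value. A minimal sketch of that kind of lookup, given purely as an illustration (the function name is hypothetical, and only the global /proc/meminfo path is handled here, unlike the per-node case the real setup/common.sh also supports):

# Illustration only (hypothetical helper, not SPDK's setup/common.sh): walk
# /proc/meminfo key by key and print the value of the requested field.
# In the trace above the matched value happens to be 0, because this host
# reports "AnonHugePages: 0 kB".
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    echo 0   # requested key not present at all
}

get_meminfo_sketch AnonHugePages    # -> 0 (kB) in this run
get_meminfo_sketch HugePages_Total  # -> 1024 after the 2048 kB pages were reserved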
get_meminfo HugePages_Surp 00:04:52.277 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.277 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43204248 kB' 'MemAvailable: 46714416 kB' 'Buffers: 2704 kB' 'Cached: 12824328 kB' 'SwapCached: 0 kB' 'Active: 9848828 kB' 'Inactive: 3506552 kB' 'Active(anon): 9454476 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531692 kB' 'Mapped: 188968 kB' 'Shmem: 8926128 kB' 'KReclaimable: 205776 kB' 'Slab: 582792 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 377016 kB' 'KernelStack: 12912 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10582164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:52.278 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... trace repeats the compare/continue cycle for the remaining /proc/meminfo fields until HugePages_Surp is reached ...]
00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
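The HugePages_Surp lookup above reads the global counter from /proc/meminfo; the same counters are also exported per NUMA node under sysfs, which is what a per-node allocation test ultimately cares about. A hedged illustration using only standard kernel sysfs paths (nothing SPDK-specific):

# Print total/free/surplus hugepage counts per NUMA node and per page size.
# On this host the log shows a global HugePages_Total of 1024 with a
# Hugepagesize of 2048 kB; these files report how that splits across nodes.
for node in /sys/devices/system/node/node*; do
    for size in "$node"/hugepages/hugepages-*; do
        printf '%s %s: total=%s free=%s surplus=%s\n' \
            "${node##*/}" "${size##*/}" \
            "$(cat "$size/nr_hugepages")" \
            "$(cat "$size/free_hugepages")" \
            "$(cat "$size/surplus_hugepages")"
    done
done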
setup/hugepages.sh@99 -- # surp=0 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43204248 kB' 'MemAvailable: 46714416 kB' 'Buffers: 2704 kB' 'Cached: 12824348 kB' 'SwapCached: 0 kB' 'Active: 9847660 kB' 'Inactive: 3506552 kB' 'Active(anon): 9453308 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530368 kB' 'Mapped: 187960 kB' 'Shmem: 8926148 kB' 'KReclaimable: 205776 kB' 'Slab: 582780 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 377004 kB' 'KernelStack: 12768 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10579624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.280 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.281 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.281 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.281 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.281 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.281 
01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... trace repeats the compare/continue cycle for the remaining /proc/meminfo fields until HugePages_Rsvd is reached ...]
00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.283 surplus_hugepages=0 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.283 anon_hugepages=0 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43204264 kB' 'MemAvailable: 46714432 kB' 'Buffers: 2704 kB' 'Cached: 12824352 kB' 'SwapCached: 0 kB' 'Active: 9847768 kB' 'Inactive: 3506552 kB' 'Active(anon): 9453416 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530496 kB' 'Mapped: 187960 kB' 'Shmem: 8926152 kB' 'KReclaimable: 205776 kB' 'Slab: 582780 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 377004 kB' 'KernelStack: 12800 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10579648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.283 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.283 
01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.284 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.285 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.286 01:51:57 
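The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" records above is get_meminfo from setup/common.sh stepping through /proc/meminfo one "key: value" pair at a time with IFS=': ' until it reaches the requested field (here HugePages_Total, echoed as 1024). A minimal, self-contained sketch of that parsing pattern, assuming only a standard /proc/meminfo layout; get_meminfo_sketch is an illustrative name, not the upstream helper:

#!/usr/bin/env bash
# Sketch: look up one field in /proc/meminfo the way the traced loop does,
# splitting each line on ': ' and skipping keys that do not match.
get_meminfo_sketch() {
  local want=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$want" ]]; then
      echo "$val"      # value only, e.g. "1024" or "60541708"
      return 0
    fi
  done < /proc/meminfo
  return 1
}

# Usage matching the checks in the trace:
get_meminfo_sketch HugePages_Total
get_meminfo_sketch HugePages_Rsvd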
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26903000 kB' 'MemUsed: 5926884 kB' 'SwapCached: 0 kB' 'Active: 3790056 kB' 'Inactive: 109764 kB' 'Active(anon): 3679168 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3698556 kB' 'Mapped: 36292 kB' 'AnonPages: 204404 kB' 'Shmem: 3477904 kB' 'KernelStack: 6936 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94844 kB' 'Slab: 309508 kB' 'SReclaimable: 94844 kB' 'SUnreclaim: 214664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.286 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 
01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.287 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16301924 kB' 'MemUsed: 11409900 kB' 'SwapCached: 0 kB' 'Active: 6058236 kB' 'Inactive: 3396788 kB' 'Active(anon): 5774772 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128544 kB' 'Mapped: 151668 kB' 'AnonPages: 326564 kB' 'Shmem: 5448292 kB' 'KernelStack: 5880 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110932 kB' 'Slab: 273272 kB' 'SReclaimable: 110932 kB' 'SUnreclaim: 162340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
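For the per-node passes (node=0 above, node=1 below), the same read loop runs against /sys/devices/system/node/node<N>/meminfo instead of /proc/meminfo; every line there carries a "Node <N> " prefix, which the trace strips with an extglob substitution before splitting. A hedged sketch of that per-node lookup, with get_node_meminfo_sketch as an illustrative name rather than the upstream function:

#!/usr/bin/env bash
# Sketch: read one field from a NUMA node's meminfo. Per-node files prefix
# each line with "Node <n> ", so drop that prefix before splitting on ': '.
shopt -s extglob
get_node_meminfo_sketch() {
  local node=$1 want=$2 line var val _
  while read -r line; do
    line=${line#Node +([0-9]) }              # strip the "Node N " prefix
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$want" ]]; then
      echo "$val"
      return 0
    fi
  done < "/sys/devices/system/node/node${node}/meminfo"
  return 1
}

# Usage matching the trace: surplus huge pages per node (both report 0 here).
get_node_meminfo_sketch 0 HugePages_Surp
get_node_meminfo_sketch 1 HugePages_Surp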
00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.288 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.289 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:52.290 node0=512 expecting 512 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:52.290 node1=512 expecting 512 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:52.290 00:04:52.290 real 0m1.488s 00:04:52.290 user 0m0.614s 00:04:52.290 sys 0m0.837s 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.290 01:51:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.290 ************************************ 00:04:52.290 END TEST per_node_1G_alloc 00:04:52.290 ************************************ 00:04:52.290 01:51:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:52.290 01:51:57 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:52.290 01:51:57 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.290 01:51:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.290 01:51:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.549 ************************************ 00:04:52.549 START TEST even_2G_alloc 00:04:52.549 ************************************ 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.549 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.550 01:51:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.484 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.484 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
00:04:53.484 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.484 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:53.484 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.484 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.484 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.484 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:53.484 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:53.484 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.484 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.484 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:53.484 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.484 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.484 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.484 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:53.484 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43208224 kB' 'MemAvailable: 46718392 kB' 'Buffers: 2704 kB' 'Cached: 12824460 kB' 'SwapCached: 0 kB' 'Active: 9848644 kB' 'Inactive: 3506552 kB' 'Active(anon): 9454292 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530812 kB' 'Mapped: 188040 kB' 'Shmem: 8926260 kB' 'KReclaimable: 205776 kB' 'Slab: 582568 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376792 kB' 'KernelStack: 12784 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10579840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
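For reference, the even_2G_alloc preamble earlier in this trace (get_test_nr_hugepages 2097152, nr_hugepages=1024, _no_nodes=2, NRHUGE=1024, HUGE_EVEN_ALLOC=yes) amounts to splitting 1024 default-sized 2048 kB hugepages evenly across the two NUMA nodes before scripts/setup.sh reserves them. A rough reconstruction of that split, using the variable names shown in the trace rather than the exact get_test_nr_hugepages_per_node source:

    # Sketch of the even per-node split traced above; simplified.
    _nr_hugepages=1024          # 2097152 kB requested / 2048 kB hugepage size
    _no_nodes=2                 # NUMA nodes on this host
    nodes_test=()
    per_node=$(( _nr_hugepages / _no_nodes ))   # 512
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$per_node     # trace: nodes_test[_no_nodes - 1]=512
        _no_nodes=$(( _no_nodes - 1 ))
    done
    echo "${nodes_test[@]}"                     # -> "512 512"

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, setup.sh is expected to reserve those pages evenly, which is what the meminfo scans in this verification pass go on to check.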
00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.749 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.750 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.751 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43208224 kB' 'MemAvailable: 46718392 kB' 'Buffers: 2704 kB' 'Cached: 12824460 kB' 'SwapCached: 0 kB' 'Active: 9845332 kB' 'Inactive: 3506552 kB' 'Active(anon): 9450980 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527864 kB' 'Mapped: 187112 kB' 'Shmem: 8926260 kB' 'KReclaimable: 205776 kB' 'Slab: 582568 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376792 kB' 'KernelStack: 12816 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10565780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.751 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
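The scans in this pass peel off anon (already 0 above), then HugePages_Surp and HugePages_Rsvd the same way; verify_nr_hugepages then queries each node and compares the reported counts against nodes_test. The pass condition visible in the earlier per_node_1G_alloc block ("node0=512 expecting 512", "[[ 512 == 512 ]]") reduces to the pattern sketched below; this is an illustrative reconstruction, not the exact hugepages.sh code.

    # Hedged sketch of the final comparison; array names follow the trace.
    declare -A sorted_t=() sorted_s=()
    nodes_test=(512 512)        # expected hugepages per node
    nodes_sys=(512 512)         # hugepages actually reported per node
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Passes when every node reports the single expected value.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]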
00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 
01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.752 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43209780 kB' 'MemAvailable: 46719948 kB' 'Buffers: 2704 kB' 'Cached: 12824480 kB' 'SwapCached: 0 kB' 'Active: 9844328 kB' 'Inactive: 3506552 kB' 'Active(anon): 9449976 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526872 kB' 'Mapped: 187044 kB' 'Shmem: 8926280 kB' 'KReclaimable: 205776 kB' 'Slab: 582568 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376792 kB' 'KernelStack: 12768 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10565800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.753 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.754 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.754 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.755 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 
01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:53.756 nr_hugepages=1024 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.756 resv_hugepages=0 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.756 surplus_hugepages=0 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.756 anon_hugepages=0 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43210376 kB' 'MemAvailable: 46720544 kB' 'Buffers: 2704 kB' 'Cached: 12824500 kB' 'SwapCached: 0 kB' 'Active: 9844108 kB' 'Inactive: 3506552 kB' 'Active(anon): 9449756 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526592 kB' 'Mapped: 187044 kB' 'Shmem: 8926300 kB' 'KReclaimable: 205776 kB' 'Slab: 582568 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376792 kB' 'KernelStack: 12736 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10565820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.756 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.757 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.758 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26912264 kB' 'MemUsed: 5917620 kB' 'SwapCached: 0 kB' 'Active: 3788508 kB' 'Inactive: 109764 kB' 'Active(anon): 3677620 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3698636 kB' 'Mapped: 35564 kB' 
'AnonPages: 202764 kB' 'Shmem: 3477984 kB' 'KernelStack: 6968 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94844 kB' 'Slab: 309360 kB' 'SReclaimable: 94844 kB' 'SUnreclaim: 214516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.758 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.759 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16298112 kB' 'MemUsed: 11413712 kB' 'SwapCached: 0 kB' 'Active: 6055900 kB' 'Inactive: 3396788 kB' 'Active(anon): 5772436 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128592 kB' 'Mapped: 151480 kB' 'AnonPages: 324108 kB' 'Shmem: 5448340 kB' 
'KernelStack: 5800 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110932 kB' 'Slab: 273208 kB' 'SReclaimable: 110932 kB' 'SUnreclaim: 162276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.760 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.761 01:51:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:53.761 node0=512 expecting 512 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:53.761 node1=512 expecting 512 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:53.761 00:04:53.761 real 0m1.437s 00:04:53.761 user 0m0.621s 00:04:53.761 sys 0m0.772s 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.761 01:51:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:53.761 ************************************ 00:04:53.761 END TEST even_2G_alloc 00:04:53.761 ************************************ 00:04:53.761 01:51:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:53.761 01:51:59 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:53.761 01:51:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.761 01:51:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.761 01:51:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:54.021 
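The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" steps above are setup/common.sh scanning a meminfo file key by key until it reaches the field requested by get_meminfo (here HugePages_Surp for node 0 and node 1 of the even_2G_alloc test). A minimal sketch of what that helper appears to do, reconstructed from the trace only; the exact locals and the not-found fallback are assumptions, not the verbatim SPDK script:

# sketch: NUMA-aware meminfo lookup, as suggested by the xtrace above
shopt -s extglob
get_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    local mem var val _ line
    # per-node statistics live in sysfs and prefix every line with "Node <n> "
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip the per-node prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1                            # not-found behaviour is a guess
}
# e.g. get_meminfo HugePages_Free 1  -> "512" on this box after even_2G_alloc

In this run HugePages_Surp reads back as 0 on both nodes, so the per-node totals stay at 512, the test prints "node0=512 expecting 512" and "node1=512 expecting 512", and the final [[ 512 == 512 ]] check passes before odd_alloc starts below.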
************************************ 00:04:54.021 START TEST odd_alloc 00:04:54.021 ************************************ 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.021 01:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.954 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.954 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:54.954 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.954 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.954 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.954 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.954 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:04:54.955 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.955 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.955 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.955 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.955 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.955 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.955 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.955 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.955 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.955 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43216020 kB' 'MemAvailable: 46726188 kB' 'Buffers: 2704 kB' 'Cached: 12824596 kB' 'SwapCached: 0 kB' 'Active: 9844824 kB' 'Inactive: 3506552 kB' 'Active(anon): 9450472 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527352 kB' 'Mapped: 187208 kB' 'Shmem: 8926396 kB' 'KReclaimable: 205776 kB' 'Slab: 582472 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376696 kB' 'KernelStack: 13216 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 
'Committed_AS: 10568556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196868 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.222 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.223 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.224 
01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43217088 kB' 'MemAvailable: 46727256 kB' 'Buffers: 2704 kB' 'Cached: 12824596 kB' 'SwapCached: 0 kB' 'Active: 9844272 kB' 'Inactive: 3506552 kB' 'Active(anon): 9449920 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526780 kB' 'Mapped: 187204 kB' 'Shmem: 8926396 kB' 'KReclaimable: 205776 kB' 'Slab: 582464 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376688 kB' 'KernelStack: 12736 kB' 'PageTables: 7488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10566204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.224 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.225 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43217088 kB' 'MemAvailable: 46727256 kB' 'Buffers: 2704 kB' 'Cached: 12824608 kB' 'SwapCached: 0 kB' 'Active: 9843036 kB' 'Inactive: 3506552 kB' 'Active(anon): 9448684 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525588 kB' 'Mapped: 187204 kB' 'Shmem: 8926408 kB' 'KReclaimable: 205776 kB' 'Slab: 582468 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376692 kB' 'KernelStack: 12688 kB' 'PageTables: 7572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10566228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.226 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.227 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:55.228 nr_hugepages=1025 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.228 resv_hugepages=0 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.228 surplus_hugepages=0 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.228 anon_hugepages=0 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43216840 kB' 'MemAvailable: 46727008 kB' 'Buffers: 2704 kB' 'Cached: 12824636 kB' 'SwapCached: 0 kB' 'Active: 9842764 kB' 'Inactive: 3506552 kB' 'Active(anon): 9448412 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525220 kB' 'Mapped: 187112 kB' 'Shmem: 8926436 kB' 'KReclaimable: 205776 kB' 'Slab: 582460 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376684 kB' 'KernelStack: 12704 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10566248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 
01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.228 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
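The xtrace output above and below is the get_meminfo helper from setup/common.sh scanning the meminfo contents key by key until it reaches the field it was asked for (first HugePages_Surp, then HugePages_Rsvd, then HugePages_Total); every non-matching key shows up in the trace as a "continue". A minimal sketch of that scan, reconstructed from the trace itself (the actual helper streams the lines produced by the printf '%s\n' visible above through its read loop, so details may differ):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern used below

  # get_meminfo FIELD [NODE] -- print FIELD from /proc/meminfo, or from the
  # per-node sysfs meminfo when a NUMA node number is given.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local -a mem
      local var val _

      # per-node statistics live under sysfs, as in the node=0 lookup later in this log
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the leading "Node N " prefix on sysfs lines

      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # each mismatch is one "continue" in the trace
          echo "${val:-0}"
          return 0
      done
  }

In this run the scan resolves HugePages_Rsvd and HugePages_Surp to 0 and HugePages_Total to 1025, which is what the (( 1025 == nr_hugepages + surp + resv )) check in setup/hugepages.sh consumes before the odd allocation (512/513) is spread across the two NUMA nodes further down.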
00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.229 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26914556 kB' 'MemUsed: 5915328 kB' 'SwapCached: 0 kB' 'Active: 3787464 kB' 'Inactive: 109764 kB' 'Active(anon): 3676576 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3698760 kB' 'Mapped: 35564 kB' 'AnonPages: 201696 kB' 'Shmem: 3478108 kB' 'KernelStack: 6952 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94844 kB' 'Slab: 309484 kB' 'SReclaimable: 94844 kB' 'SUnreclaim: 214640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.230 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16304584 kB' 'MemUsed: 11407240 kB' 'SwapCached: 0 kB' 'Active: 6055652 kB' 'Inactive: 3396788 kB' 'Active(anon): 5772188 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128604 kB' 'Mapped: 151548 kB' 'AnonPages: 323976 kB' 'Shmem: 5448352 kB' 'KernelStack: 5800 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110932 kB' 'Slab: 272960 kB' 'SReclaimable: 110932 kB' 'SUnreclaim: 162028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
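[editor's note, not part of the captured log] The long runs of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` above are the xtrace of the test helper walking every line of /sys/devices/system/node/node0/meminfo and node1/meminfo until it reaches the field it wants (HugePages_Total came back as 512 on node0 and 513 on node1). The sketch below is a hypothetical, simplified stand-in for that lookup, not the SPDK setup/common.sh implementation; the function name `node_meminfo` is invented, and the only assumption is the standard sysfs layout where each line reads "Node N <Key>: <value>".

```bash
#!/usr/bin/env bash
# Hypothetical helper (not the SPDK get_meminfo): return one field of a
# NUMA node's meminfo, e.g. HugePages_Total or HugePages_Surp.
node_meminfo() {
    local key=$1 node=$2
    local file=/sys/devices/system/node/node${node}/meminfo
    # Per-node lines look like "Node 0 HugePages_Total:   512", so skip the
    # "Node N" prefix and compare the key the same way the traced loop does.
    while read -r _ _ var val _; do
        [[ ${var%:} == "$key" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

# With the counts seen in this log, these would print 512 and 0 respectively.
node_meminfo HugePages_Total 0
node_meminfo HugePages_Surp 1
```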
00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.231 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.232 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:55.233 node0=512 expecting 513 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:55.233 node1=513 expecting 512 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:55.233 00:04:55.233 real 0m1.433s 00:04:55.233 user 0m0.587s 00:04:55.233 sys 0m0.807s 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.233 01:52:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:55.233 ************************************ 00:04:55.233 END TEST odd_alloc 00:04:55.233 ************************************ 00:04:55.233 01:52:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:55.233 01:52:00 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:55.233 01:52:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.514 01:52:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.514 01:52:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:55.514 ************************************ 00:04:55.514 START TEST custom_alloc 00:04:55.514 ************************************ 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:55.514 01:52:00 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:55.514 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.515 01:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.450 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.450 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:56.450 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.450 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.450 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.450 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.450 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.450 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.450 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.450 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.450 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.450 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:04:56.450 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.450 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.450 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.450 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.450 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42158604 kB' 'MemAvailable: 45668772 kB' 'Buffers: 2704 kB' 'Cached: 12824728 kB' 'SwapCached: 0 kB' 'Active: 9843388 kB' 'Inactive: 3506552 kB' 'Active(anon): 9449036 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525720 kB' 'Mapped: 187128 kB' 'Shmem: 8926528 kB' 'KReclaimable: 205776 kB' 'Slab: 582392 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376616 kB' 'KernelStack: 12752 kB' 'PageTables: 7700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10566448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
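[editor's note, not part of the captured log] At this point the custom_alloc run has requested HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and the trace is re-reading /proc/meminfo, which now reports HugePages_Total: 1536. The snippet below is a hedged, standalone re-statement of that accounting check in the spirit of the `(( total == nr_hugepages + surp + resv ))` comparison visible earlier in the odd_alloc trace; `read_field` and the echoed messages are invented for illustration and are not part of the test scripts.

```bash
#!/usr/bin/env bash
# Sketch only: confirm the kernel-reported hugepage total matches what the
# test asked for (512 pages on node0 + 1024 on node1 = 1536 in this log).
expected=1536

read_field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(read_field HugePages_Total)
surp=$(read_field HugePages_Surp)
rsvd=$(read_field HugePages_Rsvd)

# Mirror the check used in the odd_alloc trace: the reported total should
# equal the configured count plus surplus and reserved pages.
if (( total == expected + surp + rsvd )); then
    echo "hugepage total OK: ${total} (surp=${surp}, rsvd=${rsvd})"
else
    echo "mismatch: expected ${expected}, kernel reports ${total}" >&2
fi
```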
00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42161404 kB' 'MemAvailable: 45671572 kB' 'Buffers: 2704 kB' 'Cached: 12824728 kB' 'SwapCached: 0 kB' 'Active: 9843412 kB' 'Inactive: 3506552 kB' 'Active(anon): 9449060 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525736 kB' 'Mapped: 187128 kB' 'Shmem: 8926528 kB' 'KReclaimable: 205776 kB' 'Slab: 582380 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376604 kB' 'KernelStack: 12784 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10566464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.717 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42161460 kB' 'MemAvailable: 45671628 kB' 'Buffers: 2704 kB' 'Cached: 12824744 kB' 'SwapCached: 0 kB' 'Active: 9843336 kB' 'Inactive: 3506552 kB' 'Active(anon): 9448984 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525644 kB' 
'Mapped: 187128 kB' 'Shmem: 8926544 kB' 'KReclaimable: 205776 kB' 'Slab: 582428 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376652 kB' 'KernelStack: 12752 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10566488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:56.719 nr_hugepages=1536 00:04:56.719 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.720 resv_hugepages=0 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.720 surplus_hugepages=0 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.720 anon_hugepages=0 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42161460 kB' 'MemAvailable: 45671628 kB' 'Buffers: 2704 kB' 'Cached: 12824768 kB' 'SwapCached: 0 kB' 'Active: 9843076 kB' 'Inactive: 3506552 kB' 'Active(anon): 9448724 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525396 kB' 'Mapped: 187128 kB' 'Shmem: 8926568 kB' 'KReclaimable: 205776 kB' 'Slab: 582420 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376644 kB' 'KernelStack: 12768 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10566508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.720 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
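The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo (or a node-local meminfo file) one 'key: value' line at a time, skipping every field until it reaches the requested key -- here HugePages_Total, which echoes 1536 -- before hugepages.sh checks that total against nr_hugepages + surp + resv and then repeats the lookup per NUMA node. A condensed sketch of that lookup, reconstructed from the trace (names follow the trace; this is a simplified sketch, not the verbatim SPDK helper):

  # Sketch of the get_meminfo pattern seen in the trace.
  # $1 = meminfo key (e.g. HugePages_Total), $2 = optional NUMA node.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node queries read the node-local meminfo instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # Node-local files prefix every line with "Node N "; strip it, then
      # split each "key: value [kB]" line and skip until the key matches.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
  }

  get_meminfo HugePages_Total     # -> 1536 in this run
  get_meminfo HugePages_Surp 0    # -> 0 for node0, as echoed below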
00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.721 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26896540 kB' 'MemUsed: 5933344 kB' 'SwapCached: 0 kB' 'Active: 3787624 kB' 'Inactive: 109764 kB' 'Active(anon): 3676736 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3698888 kB' 'Mapped: 35564 kB' 'AnonPages: 201668 kB' 'Shmem: 3478236 kB' 'KernelStack: 6952 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94844 kB' 'Slab: 309452 kB' 'SReclaimable: 94844 kB' 'SUnreclaim: 214608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.722 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.723 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15264920 kB' 'MemUsed: 12446904 kB' 'SwapCached: 0 kB' 'Active: 6055508 kB' 'Inactive: 3396788 kB' 'Active(anon): 5772044 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128608 kB' 'Mapped: 151564 kB' 'AnonPages: 323728 kB' 'Shmem: 5448356 kB' 'KernelStack: 5816 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110932 kB' 'Slab: 272968 kB' 'SReclaimable: 110932 kB' 'SUnreclaim: 162036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.723 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:56.724 node0=512 expecting 512 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:56.724 node1=1024 expecting 1024 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:56.724 00:04:56.724 real 0m1.466s 00:04:56.724 user 0m0.640s 00:04:56.724 sys 0m0.791s 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.724 01:52:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:56.724 ************************************ 00:04:56.724 END TEST custom_alloc 00:04:56.724 ************************************ 00:04:56.983 01:52:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:56.983 01:52:02 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:56.983 01:52:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.983 01:52:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.983 01:52:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:56.983 ************************************ 00:04:56.983 START TEST no_shrink_alloc 00:04:56.983 ************************************ 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.983 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.984 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:04:56.984 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:56.984 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.984 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:56.984 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:56.984 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:56.984 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.984 01:52:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.913 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:57.913 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:57.913 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:57.913 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:57.913 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:57.913 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:57.913 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:57.913 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:57.913 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:57.913 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:57.913 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:57.913 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:57.913 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:57.913 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:57.913 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:57.913 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:57.913 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.175 01:52:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43206924 kB' 'MemAvailable: 46717092 kB' 'Buffers: 2704 kB' 'Cached: 12824848 kB' 'SwapCached: 0 kB' 'Active: 9843152 kB' 'Inactive: 3506552 kB' 'Active(anon): 9448800 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525800 kB' 'Mapped: 187200 kB' 'Shmem: 8926648 kB' 'KReclaimable: 205776 kB' 'Slab: 582268 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376492 kB' 'KernelStack: 12752 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10566776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.175 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
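The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" records above is the get_meminfo helper in setup/common.sh scanning the meminfo fields one at a time under xtrace (set -x prints the right-hand pattern with every character escaped, hence the backslashes). A minimal sketch of the loop that the common.sh@31-33 records imply follows; it is an approximation, not the verbatim script, and it reads /proc/meminfo directly where the real helper replays a snapshot captured earlier with mapfile (common.sh@28).

  # Hedged sketch of the scan seen at common.sh@31-33.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do     # @31: split "Key: value kB" into fields
          [[ $var == "$get" ]] || continue     # @32: keep skipping until the key matches
          echo "$val"                          # @33: print the value and stop
          return 0
      done < /proc/meminfo
      return 1
  }

With this, get_meminfo AnonHugePages prints the AnonHugePages value in kB (0 in this run), which hugepages.sh captures as anon=0 at hugepages.sh@97 a few records further on. Each lookup in this test therefore produces one full pass over the field list, which is why the same sequence of field names repeats in the trace.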
00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.176 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43209360 kB' 'MemAvailable: 46719528 kB' 'Buffers: 2704 kB' 'Cached: 12824856 kB' 'SwapCached: 0 kB' 'Active: 9844400 kB' 'Inactive: 3506552 kB' 'Active(anon): 9450048 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526692 kB' 'Mapped: 187276 kB' 'Shmem: 8926656 kB' 'KReclaimable: 205776 kB' 'Slab: 582324 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376548 kB' 'KernelStack: 12800 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10566428 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.177 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 
01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
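By this point the AnonHugePages lookup has returned 0 (anon=0 at hugepages.sh@97) and the same scan is repeating for HugePages_Surp. The common.sh@18-29 records around it show how the helper picks its input: the node argument is empty here, so the test for /sys/devices/system/node/node/meminfo fails and the system-wide /proc/meminfo is snapshotted with mapfile, with any "Node N " prefix stripped; the large printf '%s\n' 'MemTotal: ...' record is that snapshot being fed back into the read loop. A hedged standalone fragment of that input selection (in the real helper these are local variables, and the exact conditionals are not fully visible in the trace):

  # Hedged fragment: input selection suggested by common.sh@18-29.
  node=""                                   # empty in this run, so the per-node path is skipped
  mem_f=/proc/meminfo
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo   # per-NUMA-node view when a node is given
  shopt -s extglob                          # +([0-9]) below is an extglob pattern
  mapfile -t mem < "$mem_f"                 # @28: snapshot the file into an array
  mem=("${mem[@]#Node +([0-9]) }")          # @29: drop any "Node N " prefix from per-node files
  printf '%s\n' "${mem[@]}"                 # the snapshot the scan loop then reads

Stripping the prefix lets the same key-matching loop work for both the system-wide and the per-node meminfo formats.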
00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.178 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43208680 kB' 'MemAvailable: 46718848 kB' 'Buffers: 2704 kB' 'Cached: 12824876 kB' 'SwapCached: 0 kB' 'Active: 9843196 kB' 'Inactive: 3506552 kB' 'Active(anon): 9448844 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525384 kB' 'Mapped: 187140 kB' 'Shmem: 8926676 kB' 'KReclaimable: 205776 kB' 'Slab: 582300 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376524 kB' 'KernelStack: 12704 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10566452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 
01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.179 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.180 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
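The comparisons here are the third pass of the same scan, this time for HugePages_Rsvd. Once it returns, hugepages.sh has all three counters and, in the records that follow (hugepages.sh@100-110), reports them and checks that the 1024 huge pages requested by the test are still fully accounted for before re-reading HugePages_Total. A hedged sketch of that bookkeeping, assuming nr_hugepages was set to 1024 earlier in the test setup:

  # Hedged sketch of the accounting at hugepages.sh@97-110.
  anon=$(get_meminfo AnonHugePages)          # @97  -> 0
  surp=$(get_meminfo HugePages_Surp)         # @99  -> 0
  resv=$(get_meminfo HugePages_Rsvd)         # @100 -> 0
  echo "nr_hugepages=$nr_hugepages"          # @102
  echo "resv_hugepages=$resv"                # @103
  echo "surplus_hugepages=$surp"             # @104
  echo "anon_hugepages=$anon"                # @105
  (( 1024 == nr_hugepages + surp + resv ))   # @107: every requested page accounted for
  (( 1024 == nr_hugepages ))                 # @109: the pool was not shrunk
  get_meminfo HugePages_Total                # @110: re-read the pool size to confirm

The two arithmetic tests are the core of the no_shrink_alloc case: an allocation elsewhere must not have shrunk the pre-allocated huge page pool.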
00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.181 nr_hugepages=1024 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.181 resv_hugepages=0 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.181 surplus_hugepages=0 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.181 anon_hugepages=0 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43208892 kB' 'MemAvailable: 46719060 kB' 'Buffers: 2704 kB' 'Cached: 12824892 kB' 'SwapCached: 0 kB' 'Active: 9843180 kB' 'Inactive: 3506552 kB' 'Active(anon): 9448828 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525376 kB' 'Mapped: 187140 kB' 'Shmem: 8926692 kB' 'KReclaimable: 205776 kB' 'Slab: 582300 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376524 kB' 'KernelStack: 12752 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10566600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.181 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.182 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.183 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.442 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25851796 kB' 'MemUsed: 6978088 kB' 'SwapCached: 0 kB' 'Active: 3787220 kB' 'Inactive: 109764 kB' 'Active(anon): 3676332 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3698944 kB' 'Mapped: 35564 kB' 'AnonPages: 201236 kB' 'Shmem: 3478292 kB' 'KernelStack: 6936 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94844 kB' 'Slab: 309488 kB' 'SReclaimable: 94844 kB' 'SUnreclaim: 214644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:58.443 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:58.444 node0=1024 expecting 1024 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.444 01:52:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.380 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.380 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.380 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.380 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.380 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.380 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.380 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.380 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.380 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.380 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.380 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.380 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.380 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.380 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.380 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.380 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.380 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.647 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.647 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43215256 kB' 'MemAvailable: 46725424 kB' 'Buffers: 2704 kB' 'Cached: 12824964 kB' 'SwapCached: 0 kB' 'Active: 9843556 kB' 'Inactive: 3506552 kB' 'Active(anon): 9449204 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525772 kB' 'Mapped: 187288 kB' 'Shmem: 8926764 kB' 'KReclaimable: 205776 kB' 'Slab: 582192 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376416 kB' 'KernelStack: 12784 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10569384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.648 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43216824 kB' 'MemAvailable: 46726992 kB' 'Buffers: 2704 kB' 'Cached: 12824968 kB' 'SwapCached: 0 kB' 'Active: 9844532 kB' 'Inactive: 3506552 kB' 'Active(anon): 9450180 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526268 kB' 'Mapped: 187168 kB' 'Shmem: 8926768 kB' 'KReclaimable: 205776 kB' 'Slab: 582192 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376416 kB' 'KernelStack: 12960 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10568036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196804 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.649 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
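The full snapshot printf'd just above (MemTotal: 60541708 kB ... HugePages_Total: 1024 ... Hugepagesize: 2048 kB ... Hugetlb: 2097152 kB) is internally consistent on the hugepage side; a quick arithmetic check using the two values taken from that dump:

    # Values copied from the snapshot above: 1024 pages of 2048 kB each
    # should add up to the reported 'Hugetlb: 2097152 kB' (2 GiB).
    hp_total=1024 hp_size_kb=2048
    echo "$(( hp_total * hp_size_kb )) kB"    # -> 2097152 kB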
00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 
01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.650 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43220112 kB' 'MemAvailable: 46730280 kB' 'Buffers: 2704 kB' 'Cached: 12824984 kB' 'SwapCached: 0 kB' 'Active: 9844668 kB' 'Inactive: 3506552 kB' 'Active(anon): 9450316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526812 kB' 'Mapped: 187168 kB' 'Shmem: 8926784 kB' 'KReclaimable: 205776 kB' 'Slab: 582192 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376416 kB' 'KernelStack: 13120 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10568056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196836 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.651 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.651 01:52:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
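Each get_meminfo call in this trace first decides which file to scan: mem_f defaults to /proc/meminfo, and the per-node sysfs copy is only used when a NUMA node was requested. Here node= is left empty, which is why the probe shows the odd-looking path /sys/devices/system/node/node/meminfo failing and the global file being read instead, with mapfile and the "Node +([0-9])" strip then normalizing any per-node prefix. A hedged sketch of that selection, using a hypothetical pick_meminfo_file helper rather than the repo's exact logic:

    # Sketch only: prefer the per-node meminfo when a node number is given
    # and the file exists, otherwise fall back to the system-wide view.
    pick_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        printf '%s\n' "$mem_f"
    }

    # pick_meminfo_file      -> /proc/meminfo   (this run: node= left empty)
    # pick_meminfo_file 0    -> per-node counters on a NUMA system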
00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.652 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
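Shortly below, once HugePages_Rsvd has also been read, hugepages.sh echoes the collected figures (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and asserts that the page counts still add up, i.e. the preallocated pool was not shrunk behind the test's back. A sketch of that arithmetic check, with the 1024 assumed to correspond to the free hugepage count seen in the snapshots (the variable names here are illustrative):

    # Sketch of the accounting assertion performed below in the trace;
    # values are the ones the script echoes, names are illustrative.
    free_hp=1024        # HugePages_Free from the meminfo snapshot
    nr_hugepages=1024   # requested pool size
    surp=0 resv=0       # surplus / reserved pages read via get_meminfo
    if (( free_hp == nr_hugepages + surp + resv )); then
        echo "hugepage pool intact"
    else
        echo "hugepage accounting mismatch" >&2
    fi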
00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.653 nr_hugepages=1024 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.653 resv_hugepages=0 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.653 surplus_hugepages=0 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.653 anon_hugepages=0 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.653 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43218032 kB' 'MemAvailable: 46728200 kB' 'Buffers: 2704 kB' 'Cached: 12825008 kB' 'SwapCached: 0 kB' 'Active: 9845736 kB' 'Inactive: 3506552 kB' 'Active(anon): 9451384 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527872 kB' 'Mapped: 187168 kB' 'Shmem: 8926808 kB' 'KReclaimable: 205776 kB' 'Slab: 582192 kB' 'SReclaimable: 205776 kB' 'SUnreclaim: 376416 kB' 'KernelStack: 13200 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10568080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196852 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 
01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.654 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.655 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.655 01:52:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25863696 kB' 'MemUsed: 6966188 kB' 'SwapCached: 0 kB' 'Active: 3788820 kB' 'Inactive: 109764 kB' 'Active(anon): 3677932 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3698944 kB' 'Mapped: 35564 kB' 'AnonPages: 202744 kB' 'Shmem: 3478292 kB' 'KernelStack: 7112 kB' 'PageTables: 5056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94844 kB' 'Slab: 309556 kB' 'SReclaimable: 94844 kB' 'SUnreclaim: 214712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 
01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 
01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.656 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:59.657 node0=1024 expecting 1024 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:59.657 00:04:59.657 real 0m2.852s 00:04:59.657 user 0m1.142s 00:04:59.657 sys 0m1.632s 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.657 01:52:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:59.657 ************************************ 00:04:59.657 END TEST no_shrink_alloc 00:04:59.657 ************************************ 00:04:59.657 01:52:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
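The no_shrink_alloc trace above repeatedly steps through setup/common.sh's get_meminfo helper: it reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is given), strips the "Node N" prefix, then scans each "key: value" pair until it reaches the requested key (HugePages_Total, HugePages_Rsvd, HugePages_Surp, ...). A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source, looks roughly like this:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace (an assumption-based reconstruction,
# not the actual SPDK setup/common.sh).
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _ line
    local -a mem
    # Per-node stats live under sysfs; fall back to /proc/meminfo otherwise.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it so keys match either source.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total      # e.g. 1024, matching the system-wide value in the trace
get_meminfo HugePages_Surp 0     # per-node value read from node0/meminfo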
00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:59.657 01:52:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:59.917 00:04:59.917 real 0m11.494s 00:04:59.917 user 0m4.441s 00:04:59.917 sys 0m5.972s 00:04:59.917 01:52:05 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.917 01:52:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.917 ************************************ 00:04:59.917 END TEST hugepages 00:04:59.917 ************************************ 00:04:59.917 01:52:05 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:59.917 01:52:05 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:59.917 01:52:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.917 01:52:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.917 01:52:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.917 ************************************ 00:04:59.917 START TEST driver 00:04:59.917 ************************************ 00:04:59.917 01:52:05 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:59.917 * Looking for test storage... 
00:04:59.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:59.917 01:52:05 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:59.917 01:52:05 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.917 01:52:05 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.449 01:52:07 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:02.449 01:52:07 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.449 01:52:07 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.449 01:52:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:02.449 ************************************ 00:05:02.449 START TEST guess_driver 00:05:02.449 ************************************ 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:02.449 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:02.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:02.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:02.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:02.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:02.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:02.450 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:02.450 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:02.450 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:02.450 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:02.450 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:02.450 01:52:07 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:02.450 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:02.450 Looking for driver=vfio-pci 00:05:02.450 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.450 01:52:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:02.450 01:52:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.450 01:52:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.828 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.829 01:52:09 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.829 01:52:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.766 01:52:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.766 01:52:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.766 01:52:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.766 01:52:10 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:04.766 01:52:10 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:04.766 01:52:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.766 01:52:10 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.295 00:05:07.295 real 0m4.871s 00:05:07.295 user 0m1.087s 00:05:07.295 sys 0m1.906s 00:05:07.295 01:52:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.295 01:52:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:07.295 ************************************ 00:05:07.295 END TEST guess_driver 00:05:07.295 ************************************ 00:05:07.295 01:52:12 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:07.295 00:05:07.295 real 0m7.428s 00:05:07.295 user 0m1.663s 00:05:07.295 sys 0m2.911s 00:05:07.295 01:52:12 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.295 01:52:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:07.295 ************************************ 00:05:07.295 END TEST driver 00:05:07.295 ************************************ 00:05:07.295 01:52:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:07.295 01:52:12 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:07.295 01:52:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.295 01:52:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.295 01:52:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:07.295 ************************************ 00:05:07.295 START TEST devices 00:05:07.295 ************************************ 00:05:07.295 01:52:12 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:07.295 * Looking for test storage... 00:05:07.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:07.295 01:52:12 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:07.295 01:52:12 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:07.295 01:52:12 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.295 01:52:12 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:08.665 01:52:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:08.665 
01:52:14 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:08.665 No valid GPT data, bailing 00:05:08.665 01:52:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:08.665 01:52:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:08.665 01:52:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:08.665 01:52:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:08.665 01:52:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:08.665 01:52:14 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:08.665 01:52:14 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.665 01:52:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.921 ************************************ 00:05:08.921 START TEST nvme_mount 00:05:08.921 ************************************ 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:08.921 01:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:09.849 Creating new GPT entries in memory. 00:05:09.849 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:09.849 other utilities. 00:05:09.849 01:52:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:09.849 01:52:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.849 01:52:15 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:09.849 01:52:15 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:09.849 01:52:15 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:10.810 Creating new GPT entries in memory. 00:05:10.810 The operation has completed successfully. 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1444354 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:10.810 01:52:16 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.810 01:52:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:12.181 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.181 01:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:12.439 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:12.439 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:12.439 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:12.439 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:12.439 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:12.439 01:52:18 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:12.439 01:52:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.440 01:52:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.813 01:52:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
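
The trace above exercises the nvme_mount happy path end to end: zap the GPT on the test disk with sgdisk, carve a single 1 GiB partition, put ext4 on it, mount it under the repo's test/setup/nvme_mount directory, drop a marker file, re-run setup.sh config to confirm the in-use PCI device is not rebound, then tear everything down with umount and wipefs before repeating the cycle against the whole disk. A minimal standalone sketch of that cycle, assuming a disposable scratch disk /dev/nvme0n1 and a throwaway mount point (illustrative only, not part of the harness output above):

  # Illustrative sketch -- assumes /dev/nvme0n1 is a disposable scratch disk.
  disk=/dev/nvme0n1
  mnt=/tmp/nvme_mount_demo
  sgdisk "$disk" --zap-all                      # destroy any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:2099199           # one 1 GiB partition, sectors 2048..2099199
  udevadm settle                                # the harness waits for the uevent via sync_dev_uevents.sh
  mkfs.ext4 -qF "${disk}p1"                     # quiet, force: same flags the trace uses
  mkdir -p "$mnt" && mount "${disk}p1" "$mnt"   # mount the fresh filesystem
  touch "$mnt/test_nvme"                        # marker file the verify step checks for
  mountpoint -q "$mnt" && [ -e "$mnt/test_nvme" ] && echo "mount verified"
  # cleanup mirrors cleanup_nvme() in setup/devices.sh
  rm "$mnt/test_nvme"
  umount "$mnt"
  wipefs --all "${disk}p1"
  wipefs --all "$disk"
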
00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.188 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.189 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.189 00:05:15.189 real 0m6.351s 00:05:15.189 user 0m1.455s 00:05:15.189 sys 0m2.454s 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.189 01:52:20 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:15.189 ************************************ 00:05:15.189 END TEST nvme_mount 00:05:15.189 ************************************ 00:05:15.189 01:52:20 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:15.189 01:52:20 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:15.189 01:52:20 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.189 01:52:20 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.189 01:52:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:15.189 ************************************ 00:05:15.189 START TEST dm_mount 00:05:15.189 ************************************ 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:15.189 01:52:20 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:16.123 Creating new GPT entries in memory. 00:05:16.123 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:16.123 other utilities. 00:05:16.123 01:52:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:16.123 01:52:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.123 01:52:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:16.123 01:52:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.123 01:52:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:17.506 Creating new GPT entries in memory. 00:05:17.506 The operation has completed successfully. 00:05:17.506 01:52:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:17.506 01:52:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.506 01:52:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:17.506 01:52:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.506 01:52:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:18.439 The operation has completed successfully. 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1446738 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:18.439 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.440 01:52:23 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.376 01:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:19.634 01:52:25 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.634 01:52:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.570 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.571 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.829 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.829 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:20.829 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:20.829 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:20.829 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.829 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:20.829 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:20.829 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:20.829 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:20.830 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:20.830 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:20.830 01:52:26 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:20.830 00:05:20.830 real 0m5.696s 00:05:20.830 user 0m0.931s 00:05:20.830 sys 0m1.617s 00:05:20.830 01:52:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.830 01:52:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:20.830 ************************************ 00:05:20.830 END TEST dm_mount 00:05:20.830 ************************************ 00:05:20.830 01:52:26 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:20.830 01:52:26 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:20.830 01:52:26 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:20.830 01:52:26 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:20.830 01:52:26 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:20.830 01:52:26 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:20.830 01:52:26 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:20.830 01:52:26 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:21.088 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:21.088 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:21.088 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:21.088 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:21.088 01:52:26 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:21.088 01:52:26 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.088 01:52:26 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.088 01:52:26 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.088 01:52:26 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:21.088 01:52:26 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.088 01:52:26 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:21.088 00:05:21.088 real 0m13.898s 00:05:21.088 user 0m3.001s 00:05:21.088 sys 0m5.078s 00:05:21.088 01:52:26 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.088 01:52:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:21.088 ************************************ 00:05:21.088 END TEST devices 00:05:21.088 ************************************ 00:05:21.088 01:52:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:21.088 00:05:21.088 real 0m43.739s 00:05:21.088 user 0m12.536s 00:05:21.088 sys 0m19.422s 00:05:21.088 01:52:26 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.088 01:52:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:21.088 ************************************ 00:05:21.088 END TEST setup.sh 00:05:21.088 ************************************ 00:05:21.346 01:52:26 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.346 01:52:26 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:22.282 Hugepages 00:05:22.282 node hugesize free / total 00:05:22.282 node0 1048576kB 0 / 0 00:05:22.282 node0 2048kB 2048 / 2048 00:05:22.282 node1 1048576kB 0 / 0 00:05:22.282 node1 2048kB 0 / 0 00:05:22.282 00:05:22.282 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.282 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:22.282 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:22.282 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:22.282 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:22.282 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:22.282 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:22.282 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:22.282 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:22.282 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:22.282 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:22.282 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:22.282 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:22.282 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:22.282 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:22.282 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:22.282 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:22.282 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:22.282 01:52:27 -- spdk/autotest.sh@130 -- # uname -s 00:05:22.282 01:52:27 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:22.282 01:52:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:22.282 01:52:27 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:23.657 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:23.657 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:23.657 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:23.657 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:23.657 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:23.657 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:23.657 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:23.657 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:23.657 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:23.657 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:23.657 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:23.657 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:23.658 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:23.658 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:23.658 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:23.658 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:24.592 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.851 01:52:30 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:25.787 01:52:31 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:25.787 01:52:31 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:25.787 01:52:31 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.787 01:52:31 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:25.787 01:52:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:25.787 01:52:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:25.787 01:52:31 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.787 01:52:31 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:25.787 01:52:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:25.787 01:52:31 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:25.787 01:52:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:25.787 01:52:31 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:26.775 Waiting for block devices as requested 00:05:27.034 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:27.034 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:27.292 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:27.292 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:27.292 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:27.292 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:27.550 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:27.550 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:27.550 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:27.550 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:27.809 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:27.809 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:27.809 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:28.068 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:28.068 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:28.068 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:28.068 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:28.327 01:52:33 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:28.327 01:52:33 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:28.327 01:52:33 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:28.327 01:52:33 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:28.327 01:52:33 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:28.327 01:52:33 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:28.327 01:52:33 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:28.327 01:52:33 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:28.327 01:52:33 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:28.327 01:52:33 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:28.327 01:52:33 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:28.327 01:52:33 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:28.327 01:52:33 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:28.327 01:52:33 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:28.327 01:52:33 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:28.327 01:52:33 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:28.327 01:52:33 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:28.327 01:52:33 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:28.327 01:52:33 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:28.327 01:52:33 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:28.327 01:52:33 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:28.327 01:52:33 -- common/autotest_common.sh@1557 -- # continue 00:05:28.327 01:52:33 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:28.327 01:52:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.327 01:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:28.327 01:52:33 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:28.327 01:52:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.327 01:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:28.327 01:52:33 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:29.703 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.703 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:29.703 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.703 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.703 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.703 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.703 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.703 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:29.703 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.703 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:05:29.703 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.703 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.703 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.703 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.703 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.703 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:30.639 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:30.639 01:52:36 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:30.639 01:52:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.639 01:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:30.639 01:52:36 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:30.639 01:52:36 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:30.639 01:52:36 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:30.639 01:52:36 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:30.639 01:52:36 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:30.639 01:52:36 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:30.639 01:52:36 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:30.639 01:52:36 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:30.639 01:52:36 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.639 01:52:36 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:30.639 01:52:36 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:30.639 01:52:36 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:30.639 01:52:36 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:30.639 01:52:36 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:30.639 01:52:36 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:30.639 01:52:36 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:30.639 01:52:36 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:30.639 01:52:36 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:30.639 01:52:36 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:30.639 01:52:36 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:30.639 01:52:36 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1451918 00:05:30.639 01:52:36 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.639 01:52:36 -- common/autotest_common.sh@1598 -- # waitforlisten 1451918 00:05:30.639 01:52:36 -- common/autotest_common.sh@829 -- # '[' -z 1451918 ']' 00:05:30.639 01:52:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.639 01:52:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.639 01:52:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.639 01:52:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.639 01:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:30.639 [2024-07-14 01:52:36.304705] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
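
The opal_revert_cleanup step above shows the pattern the harness uses to find the NVMe controllers to operate on: scripts/gen_nvme.sh emits a JSON config entry for every local controller, jq pulls out each traddr (the PCI address), and the sysfs device file is read so that only drives with PCI device id 0x0a54 are kept before spdk_tgt is started. A minimal sketch of that discovery loop, assuming it is run from the root of an SPDK checkout (illustrative only):

  # Illustrative sketch -- run from the root of an SPDK checkout.
  bdfs=()
  for bdf in $(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
      # keep only controllers whose PCI device id is 0x0a54, as the trace does
      if [ "$(cat "/sys/bus/pci/devices/$bdf/device")" = "0x0a54" ]; then
          bdfs+=("$bdf")
      fi
  done
  printf '%s\n' "${bdfs[@]}"   # e.g. 0000:88:00.0 in the run above
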
00:05:30.639 [2024-07-14 01:52:36.304791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1451918 ] 00:05:30.898 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.898 [2024-07-14 01:52:36.364689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.898 [2024-07-14 01:52:36.452205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.156 01:52:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.156 01:52:36 -- common/autotest_common.sh@862 -- # return 0 00:05:31.156 01:52:36 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:31.156 01:52:36 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:31.156 01:52:36 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:34.437 nvme0n1 00:05:34.437 01:52:39 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:34.437 [2024-07-14 01:52:40.011359] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:34.437 [2024-07-14 01:52:40.011410] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:34.437 request: 00:05:34.437 { 00:05:34.437 "nvme_ctrlr_name": "nvme0", 00:05:34.437 "password": "test", 00:05:34.437 "method": "bdev_nvme_opal_revert", 00:05:34.437 "req_id": 1 00:05:34.437 } 00:05:34.437 Got JSON-RPC error response 00:05:34.437 response: 00:05:34.437 { 00:05:34.437 "code": -32603, 00:05:34.437 "message": "Internal error" 00:05:34.437 } 00:05:34.437 01:52:40 -- common/autotest_common.sh@1604 -- # true 00:05:34.437 01:52:40 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:34.437 01:52:40 -- common/autotest_common.sh@1608 -- # killprocess 1451918 00:05:34.437 01:52:40 -- common/autotest_common.sh@948 -- # '[' -z 1451918 ']' 00:05:34.437 01:52:40 -- common/autotest_common.sh@952 -- # kill -0 1451918 00:05:34.437 01:52:40 -- common/autotest_common.sh@953 -- # uname 00:05:34.437 01:52:40 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.438 01:52:40 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1451918 00:05:34.438 01:52:40 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.438 01:52:40 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.438 01:52:40 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1451918' 00:05:34.438 killing process with pid 1451918 00:05:34.438 01:52:40 -- common/autotest_common.sh@967 -- # kill 1451918 00:05:34.438 01:52:40 -- common/autotest_common.sh@972 -- # wait 1451918 00:05:36.333 01:52:41 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:36.333 01:52:41 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:36.333 01:52:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:36.333 01:52:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:36.333 01:52:41 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:36.333 01:52:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.333 01:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:36.333 01:52:41 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:36.333 01:52:41 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:36.334 01:52:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.334 01:52:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.334 01:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:36.334 ************************************ 00:05:36.334 START TEST env 00:05:36.334 ************************************ 00:05:36.334 01:52:41 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:36.334 * Looking for test storage... 00:05:36.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:36.334 01:52:41 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.334 01:52:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.334 01:52:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.334 01:52:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.334 ************************************ 00:05:36.334 START TEST env_memory 00:05:36.334 ************************************ 00:05:36.334 01:52:41 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.334 00:05:36.334 00:05:36.334 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.334 http://cunit.sourceforge.net/ 00:05:36.334 00:05:36.334 00:05:36.334 Suite: memory 00:05:36.334 Test: alloc and free memory map ...[2024-07-14 01:52:41.950517] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:36.334 passed 00:05:36.334 Test: mem map translation ...[2024-07-14 01:52:41.970484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:36.334 [2024-07-14 01:52:41.970506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:36.334 [2024-07-14 01:52:41.970556] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:36.334 [2024-07-14 01:52:41.970568] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:36.334 passed 00:05:36.334 Test: mem map registration ...[2024-07-14 01:52:42.011112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:36.334 [2024-07-14 01:52:42.011133] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:36.334 passed 00:05:36.593 Test: mem map adjacent registrations ...passed 00:05:36.593 00:05:36.593 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.593 suites 1 1 n/a 0 0 00:05:36.593 tests 4 4 4 0 0 00:05:36.593 asserts 152 152 152 0 n/a 00:05:36.593 00:05:36.593 Elapsed time = 0.140 seconds 00:05:36.593 00:05:36.593 real 0m0.149s 00:05:36.593 user 0m0.141s 00:05:36.593 sys 0m0.007s 00:05:36.593 01:52:42 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.593 01:52:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:36.593 ************************************ 00:05:36.593 END TEST env_memory 00:05:36.593 ************************************ 00:05:36.593 01:52:42 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.593 01:52:42 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:36.593 01:52:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.593 01:52:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.593 01:52:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.593 ************************************ 00:05:36.593 START TEST env_vtophys 00:05:36.593 ************************************ 00:05:36.593 01:52:42 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:36.593 EAL: lib.eal log level changed from notice to debug 00:05:36.593 EAL: Detected lcore 0 as core 0 on socket 0 00:05:36.593 EAL: Detected lcore 1 as core 1 on socket 0 00:05:36.593 EAL: Detected lcore 2 as core 2 on socket 0 00:05:36.593 EAL: Detected lcore 3 as core 3 on socket 0 00:05:36.593 EAL: Detected lcore 4 as core 4 on socket 0 00:05:36.593 EAL: Detected lcore 5 as core 5 on socket 0 00:05:36.593 EAL: Detected lcore 6 as core 8 on socket 0 00:05:36.593 EAL: Detected lcore 7 as core 9 on socket 0 00:05:36.593 EAL: Detected lcore 8 as core 10 on socket 0 00:05:36.593 EAL: Detected lcore 9 as core 11 on socket 0 00:05:36.593 EAL: Detected lcore 10 as core 12 on socket 0 00:05:36.593 EAL: Detected lcore 11 as core 13 on socket 0 00:05:36.593 EAL: Detected lcore 12 as core 0 on socket 1 00:05:36.593 EAL: Detected lcore 13 as core 1 on socket 1 00:05:36.593 EAL: Detected lcore 14 as core 2 on socket 1 00:05:36.593 EAL: Detected lcore 15 as core 3 on socket 1 00:05:36.593 EAL: Detected lcore 16 as core 4 on socket 1 00:05:36.593 EAL: Detected lcore 17 as core 5 on socket 1 00:05:36.593 EAL: Detected lcore 18 as core 8 on socket 1 00:05:36.593 EAL: Detected lcore 19 as core 9 on socket 1 00:05:36.593 EAL: Detected lcore 20 as core 10 on socket 1 00:05:36.593 EAL: Detected lcore 21 as core 11 on socket 1 00:05:36.593 EAL: Detected lcore 22 as core 12 on socket 1 00:05:36.593 EAL: Detected lcore 23 as core 13 on socket 1 00:05:36.593 EAL: Detected lcore 24 as core 0 on socket 0 00:05:36.593 EAL: Detected lcore 25 as core 1 on socket 0 00:05:36.593 EAL: Detected lcore 26 as core 2 on socket 0 00:05:36.593 EAL: Detected lcore 27 as core 3 on socket 0 00:05:36.593 EAL: Detected lcore 28 as core 4 on socket 0 00:05:36.593 EAL: Detected lcore 29 as core 5 on socket 0 00:05:36.593 EAL: Detected lcore 30 as core 8 on socket 0 00:05:36.593 EAL: Detected lcore 31 as core 9 on socket 0 00:05:36.593 EAL: Detected lcore 32 as core 10 on socket 0 00:05:36.593 EAL: Detected lcore 33 as core 11 on socket 0 00:05:36.593 EAL: Detected lcore 34 as core 12 on socket 0 00:05:36.593 EAL: Detected lcore 35 as core 13 on socket 0 00:05:36.593 EAL: Detected lcore 36 as core 0 on socket 1 00:05:36.593 EAL: Detected lcore 37 as core 1 on socket 1 00:05:36.593 EAL: Detected lcore 38 as core 2 on socket 1 00:05:36.593 EAL: Detected lcore 39 as core 3 on socket 1 00:05:36.593 EAL: Detected lcore 40 as core 4 on socket 1 00:05:36.593 EAL: Detected lcore 41 as core 5 on socket 1 00:05:36.593 EAL: Detected 
lcore 42 as core 8 on socket 1 00:05:36.593 EAL: Detected lcore 43 as core 9 on socket 1 00:05:36.593 EAL: Detected lcore 44 as core 10 on socket 1 00:05:36.593 EAL: Detected lcore 45 as core 11 on socket 1 00:05:36.593 EAL: Detected lcore 46 as core 12 on socket 1 00:05:36.593 EAL: Detected lcore 47 as core 13 on socket 1 00:05:36.593 EAL: Maximum logical cores by configuration: 128 00:05:36.593 EAL: Detected CPU lcores: 48 00:05:36.593 EAL: Detected NUMA nodes: 2 00:05:36.593 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:36.593 EAL: Detected shared linkage of DPDK 00:05:36.593 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:36.593 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:36.593 EAL: Registered [vdev] bus. 00:05:36.593 EAL: bus.vdev log level changed from disabled to notice 00:05:36.593 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:36.593 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:36.593 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:36.593 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:36.593 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:36.593 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:36.593 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:36.593 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:36.593 EAL: No shared files mode enabled, IPC will be disabled 00:05:36.593 EAL: No shared files mode enabled, IPC is disabled 00:05:36.593 EAL: Bus pci wants IOVA as 'DC' 00:05:36.593 EAL: Bus vdev wants IOVA as 'DC' 00:05:36.593 EAL: Buses did not request a specific IOVA mode. 00:05:36.593 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:36.593 EAL: Selected IOVA mode 'VA' 00:05:36.593 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.593 EAL: Probing VFIO support... 00:05:36.593 EAL: IOMMU type 1 (Type 1) is supported 00:05:36.593 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:36.593 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:36.593 EAL: VFIO support initialized 00:05:36.593 EAL: Ask a virtual area of 0x2e000 bytes 00:05:36.593 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:36.593 EAL: Setting up physically contiguous memory... 
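Before any hugepage memory is reserved, EAL probes the IOMMU: type 1 is supported here, so VFIO is initialized and IOVA-as-VA mode is selected. A quick, hedged way to confirm the same preconditions on a host before running this suite (these checks are not part of the test scripts themselves):

  # a non-empty listing means the kernel IOMMU is enabled (e.g. intel_iommu=on)
  ls /sys/kernel/iommu_groups | wc -l
  # vfio / vfio-pci must be available for the 'VFIO support initialized' path
  lsmod | grep -E '^vfio'
  # the devices rebound earlier in the log should now sit under vfio-pci
  ls /sys/bus/pci/drivers/vfio-pci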
00:05:36.593 EAL: Setting maximum number of open files to 524288 00:05:36.593 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:36.593 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:36.593 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:36.593 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.593 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:36.594 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.594 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:36.594 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:36.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.594 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:36.594 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.594 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:36.594 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:36.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.594 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:36.594 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.594 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:36.594 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:36.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.594 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:36.594 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.594 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:36.594 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:36.594 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:36.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.594 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:36.594 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.594 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:36.594 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:36.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.594 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:36.594 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.594 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:36.594 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:36.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.594 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:36.594 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.594 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:36.594 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:36.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.594 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:36.594 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.594 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:36.594 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:36.594 EAL: Hugepages will be freed exactly as allocated. 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: TSC frequency is ~2700000 KHz 00:05:36.594 EAL: Main lcore 0 is ready (tid=7f7ad15d9a00;cpuset=[0]) 00:05:36.594 EAL: Trying to obtain current memory policy. 00:05:36.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.594 EAL: Restoring previous memory policy: 0 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was expanded by 2MB 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:36.594 EAL: Mem event callback 'spdk:(nil)' registered 00:05:36.594 00:05:36.594 00:05:36.594 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.594 http://cunit.sourceforge.net/ 00:05:36.594 00:05:36.594 00:05:36.594 Suite: components_suite 00:05:36.594 Test: vtophys_malloc_test ...passed 00:05:36.594 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:36.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.594 EAL: Restoring previous memory policy: 4 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was expanded by 4MB 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was shrunk by 4MB 00:05:36.594 EAL: Trying to obtain current memory policy. 00:05:36.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.594 EAL: Restoring previous memory policy: 4 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was expanded by 6MB 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was shrunk by 6MB 00:05:36.594 EAL: Trying to obtain current memory policy. 00:05:36.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.594 EAL: Restoring previous memory policy: 4 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was expanded by 10MB 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was shrunk by 10MB 00:05:36.594 EAL: Trying to obtain current memory policy. 
00:05:36.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.594 EAL: Restoring previous memory policy: 4 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was expanded by 18MB 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was shrunk by 18MB 00:05:36.594 EAL: Trying to obtain current memory policy. 00:05:36.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.594 EAL: Restoring previous memory policy: 4 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was expanded by 34MB 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was shrunk by 34MB 00:05:36.594 EAL: Trying to obtain current memory policy. 00:05:36.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.594 EAL: Restoring previous memory policy: 4 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was expanded by 66MB 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was shrunk by 66MB 00:05:36.594 EAL: Trying to obtain current memory policy. 00:05:36.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.853 EAL: Restoring previous memory policy: 4 00:05:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.853 EAL: request: mp_malloc_sync 00:05:36.853 EAL: No shared files mode enabled, IPC is disabled 00:05:36.853 EAL: Heap on socket 0 was expanded by 130MB 00:05:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.853 EAL: request: mp_malloc_sync 00:05:36.853 EAL: No shared files mode enabled, IPC is disabled 00:05:36.853 EAL: Heap on socket 0 was shrunk by 130MB 00:05:36.853 EAL: Trying to obtain current memory policy. 00:05:36.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.853 EAL: Restoring previous memory policy: 4 00:05:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.853 EAL: request: mp_malloc_sync 00:05:36.853 EAL: No shared files mode enabled, IPC is disabled 00:05:36.853 EAL: Heap on socket 0 was expanded by 258MB 00:05:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.853 EAL: request: mp_malloc_sync 00:05:36.853 EAL: No shared files mode enabled, IPC is disabled 00:05:36.853 EAL: Heap on socket 0 was shrunk by 258MB 00:05:36.853 EAL: Trying to obtain current memory policy. 
00:05:36.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.111 EAL: Restoring previous memory policy: 4 00:05:37.111 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.111 EAL: request: mp_malloc_sync 00:05:37.111 EAL: No shared files mode enabled, IPC is disabled 00:05:37.111 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.111 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.370 EAL: request: mp_malloc_sync 00:05:37.370 EAL: No shared files mode enabled, IPC is disabled 00:05:37.370 EAL: Heap on socket 0 was shrunk by 514MB 00:05:37.370 EAL: Trying to obtain current memory policy. 00:05:37.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.628 EAL: Restoring previous memory policy: 4 00:05:37.628 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.628 EAL: request: mp_malloc_sync 00:05:37.628 EAL: No shared files mode enabled, IPC is disabled 00:05:37.628 EAL: Heap on socket 0 was expanded by 1026MB 00:05:37.886 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.143 EAL: request: mp_malloc_sync 00:05:38.143 EAL: No shared files mode enabled, IPC is disabled 00:05:38.143 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.143 passed 00:05:38.143 00:05:38.143 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.143 suites 1 1 n/a 0 0 00:05:38.143 tests 2 2 2 0 0 00:05:38.143 asserts 497 497 497 0 n/a 00:05:38.143 00:05:38.143 Elapsed time = 1.377 seconds 00:05:38.143 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.143 EAL: request: mp_malloc_sync 00:05:38.143 EAL: No shared files mode enabled, IPC is disabled 00:05:38.143 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.143 EAL: No shared files mode enabled, IPC is disabled 00:05:38.143 EAL: No shared files mode enabled, IPC is disabled 00:05:38.143 EAL: No shared files mode enabled, IPC is disabled 00:05:38.143 00:05:38.143 real 0m1.496s 00:05:38.143 user 0m0.863s 00:05:38.143 sys 0m0.598s 00:05:38.143 01:52:43 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.143 01:52:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:38.143 ************************************ 00:05:38.143 END TEST env_vtophys 00:05:38.143 ************************************ 00:05:38.143 01:52:43 env -- common/autotest_common.sh@1142 -- # return 0 00:05:38.143 01:52:43 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.143 01:52:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.143 01:52:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.143 01:52:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.143 ************************************ 00:05:38.143 START TEST env_pci 00:05:38.143 ************************************ 00:05:38.143 01:52:43 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.143 00:05:38.143 00:05:38.143 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.143 http://cunit.sourceforge.net/ 00:05:38.143 00:05:38.143 00:05:38.143 Suite: pci 00:05:38.143 Test: pci_hook ...[2024-07-14 01:52:43.668046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1452807 has claimed it 00:05:38.143 EAL: Cannot find device (10000:00:01.0) 00:05:38.143 EAL: Failed to attach device on primary process 00:05:38.143 passed 00:05:38.143 
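env_memory, env_vtophys and env_pci are thin wrappers around stand-alone CUnit binaries, so each one can be rerun in isolation when debugging a failure. A rough sketch using the paths from this workspace (root privileges and pre-reserved 2048 kB hugepages are assumed; the EAL errors printed by pci_ut about 10000:00:01.0 belong to a test case that still reports passed, as shown above):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/env/memory/memory_ut
  sudo ./test/env/vtophys/vtophys
  sudo ./test/env/pci/pci_ut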
00:05:38.143 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.143 suites 1 1 n/a 0 0 00:05:38.143 tests 1 1 1 0 0 00:05:38.143 asserts 25 25 25 0 n/a 00:05:38.143 00:05:38.143 Elapsed time = 0.020 seconds 00:05:38.143 00:05:38.143 real 0m0.031s 00:05:38.143 user 0m0.011s 00:05:38.143 sys 0m0.020s 00:05:38.143 01:52:43 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.143 01:52:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:38.143 ************************************ 00:05:38.143 END TEST env_pci 00:05:38.143 ************************************ 00:05:38.143 01:52:43 env -- common/autotest_common.sh@1142 -- # return 0 00:05:38.143 01:52:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.143 01:52:43 env -- env/env.sh@15 -- # uname 00:05:38.143 01:52:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.143 01:52:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.143 01:52:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.143 01:52:43 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:38.143 01:52:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.143 01:52:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.143 ************************************ 00:05:38.143 START TEST env_dpdk_post_init 00:05:38.143 ************************************ 00:05:38.143 01:52:43 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.143 EAL: Detected CPU lcores: 48 00:05:38.143 EAL: Detected NUMA nodes: 2 00:05:38.143 EAL: Detected shared linkage of DPDK 00:05:38.143 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.143 EAL: Selected IOVA mode 'VA' 00:05:38.143 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.143 EAL: VFIO support initialized 00:05:38.143 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.401 EAL: Using IOMMU type 1 (Type 1) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 
0000:80:04.6 (socket 1) 00:05:38.401 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:39.333 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:42.611 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:42.611 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:42.611 Starting DPDK initialization... 00:05:42.611 Starting SPDK post initialization... 00:05:42.611 SPDK NVMe probe 00:05:42.611 Attaching to 0000:88:00.0 00:05:42.611 Attached to 0000:88:00.0 00:05:42.611 Cleaning up... 00:05:42.611 00:05:42.611 real 0m4.430s 00:05:42.611 user 0m3.326s 00:05:42.611 sys 0m0.165s 00:05:42.611 01:52:48 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.611 01:52:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.611 ************************************ 00:05:42.611 END TEST env_dpdk_post_init 00:05:42.611 ************************************ 00:05:42.611 01:52:48 env -- common/autotest_common.sh@1142 -- # return 0 00:05:42.611 01:52:48 env -- env/env.sh@26 -- # uname 00:05:42.611 01:52:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:42.611 01:52:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.611 01:52:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.611 01:52:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.611 01:52:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.611 ************************************ 00:05:42.611 START TEST env_mem_callbacks 00:05:42.611 ************************************ 00:05:42.611 01:52:48 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.611 EAL: Detected CPU lcores: 48 00:05:42.611 EAL: Detected NUMA nodes: 2 00:05:42.611 EAL: Detected shared linkage of DPDK 00:05:42.611 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.611 EAL: Selected IOVA mode 'VA' 00:05:42.611 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.611 EAL: VFIO support initialized 00:05:42.611 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.611 00:05:42.611 00:05:42.611 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.611 http://cunit.sourceforge.net/ 00:05:42.611 00:05:42.611 00:05:42.611 Suite: memory 00:05:42.611 Test: test ... 
00:05:42.611 register 0x200000200000 2097152 00:05:42.611 malloc 3145728 00:05:42.611 register 0x200000400000 4194304 00:05:42.611 buf 0x200000500000 len 3145728 PASSED 00:05:42.611 malloc 64 00:05:42.611 buf 0x2000004fff40 len 64 PASSED 00:05:42.611 malloc 4194304 00:05:42.611 register 0x200000800000 6291456 00:05:42.611 buf 0x200000a00000 len 4194304 PASSED 00:05:42.611 free 0x200000500000 3145728 00:05:42.611 free 0x2000004fff40 64 00:05:42.611 unregister 0x200000400000 4194304 PASSED 00:05:42.611 free 0x200000a00000 4194304 00:05:42.611 unregister 0x200000800000 6291456 PASSED 00:05:42.611 malloc 8388608 00:05:42.611 register 0x200000400000 10485760 00:05:42.611 buf 0x200000600000 len 8388608 PASSED 00:05:42.611 free 0x200000600000 8388608 00:05:42.611 unregister 0x200000400000 10485760 PASSED 00:05:42.611 passed 00:05:42.611 00:05:42.611 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.611 suites 1 1 n/a 0 0 00:05:42.611 tests 1 1 1 0 0 00:05:42.611 asserts 15 15 15 0 n/a 00:05:42.611 00:05:42.611 Elapsed time = 0.005 seconds 00:05:42.611 00:05:42.611 real 0m0.049s 00:05:42.611 user 0m0.011s 00:05:42.611 sys 0m0.036s 00:05:42.611 01:52:48 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.611 01:52:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:42.611 ************************************ 00:05:42.611 END TEST env_mem_callbacks 00:05:42.611 ************************************ 00:05:42.611 01:52:48 env -- common/autotest_common.sh@1142 -- # return 0 00:05:42.611 00:05:42.611 real 0m6.451s 00:05:42.611 user 0m4.466s 00:05:42.611 sys 0m1.028s 00:05:42.611 01:52:48 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.611 01:52:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.611 ************************************ 00:05:42.611 END TEST env 00:05:42.611 ************************************ 00:05:42.869 01:52:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.869 01:52:48 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:42.869 01:52:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.869 01:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.869 01:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.869 ************************************ 00:05:42.869 START TEST rpc 00:05:42.869 ************************************ 00:05:42.869 01:52:48 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:42.869 * Looking for test storage... 00:05:42.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:42.869 01:52:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1453478 00:05:42.869 01:52:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:42.869 01:52:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.869 01:52:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1453478 00:05:42.869 01:52:48 rpc -- common/autotest_common.sh@829 -- # '[' -z 1453478 ']' 00:05:42.869 01:52:48 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.869 01:52:48 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.869 01:52:48 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:42.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.869 01:52:48 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.869 01:52:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.869 [2024-07-14 01:52:48.437587] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:42.869 [2024-07-14 01:52:48.437701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1453478 ] 00:05:42.869 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.869 [2024-07-14 01:52:48.500961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.126 [2024-07-14 01:52:48.590638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:43.126 [2024-07-14 01:52:48.590704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1453478' to capture a snapshot of events at runtime. 00:05:43.126 [2024-07-14 01:52:48.590718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:43.126 [2024-07-14 01:52:48.590729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:43.126 [2024-07-14 01:52:48.590738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1453478 for offline analysis/debug. 00:05:43.126 [2024-07-14 01:52:48.590786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.414 01:52:48 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.414 01:52:48 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:43.414 01:52:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.414 01:52:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.414 01:52:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:43.414 01:52:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:43.414 01:52:48 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.414 01:52:48 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.414 01:52:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 ************************************ 00:05:43.414 START TEST rpc_integrity 00:05:43.414 ************************************ 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.414 { 00:05:43.414 "name": "Malloc0", 00:05:43.414 "aliases": [ 00:05:43.414 "19274169-f45a-44a9-abe9-5d0f96d81f67" 00:05:43.414 ], 00:05:43.414 "product_name": "Malloc disk", 00:05:43.414 "block_size": 512, 00:05:43.414 "num_blocks": 16384, 00:05:43.414 "uuid": "19274169-f45a-44a9-abe9-5d0f96d81f67", 00:05:43.414 "assigned_rate_limits": { 00:05:43.414 "rw_ios_per_sec": 0, 00:05:43.414 "rw_mbytes_per_sec": 0, 00:05:43.414 "r_mbytes_per_sec": 0, 00:05:43.414 "w_mbytes_per_sec": 0 00:05:43.414 }, 00:05:43.414 "claimed": false, 00:05:43.414 "zoned": false, 00:05:43.414 "supported_io_types": { 00:05:43.414 "read": true, 00:05:43.414 "write": true, 00:05:43.414 "unmap": true, 00:05:43.414 "flush": true, 00:05:43.414 "reset": true, 00:05:43.414 "nvme_admin": false, 00:05:43.414 "nvme_io": false, 00:05:43.414 "nvme_io_md": false, 00:05:43.414 "write_zeroes": true, 00:05:43.414 "zcopy": true, 00:05:43.414 "get_zone_info": false, 00:05:43.414 "zone_management": false, 00:05:43.414 "zone_append": false, 00:05:43.414 "compare": false, 00:05:43.414 "compare_and_write": false, 00:05:43.414 "abort": true, 00:05:43.414 "seek_hole": false, 00:05:43.414 "seek_data": false, 00:05:43.414 "copy": true, 00:05:43.414 "nvme_iov_md": false 00:05:43.414 }, 00:05:43.414 "memory_domains": [ 00:05:43.414 { 00:05:43.414 "dma_device_id": "system", 00:05:43.414 "dma_device_type": 1 00:05:43.414 }, 00:05:43.414 { 00:05:43.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.414 "dma_device_type": 2 00:05:43.414 } 00:05:43.414 ], 00:05:43.414 "driver_specific": {} 00:05:43.414 } 00:05:43.414 ]' 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 [2024-07-14 01:52:48.977405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:43.414 [2024-07-14 01:52:48.977453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.414 [2024-07-14 01:52:48.977478] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1079bb0 00:05:43.414 [2024-07-14 01:52:48.977494] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.414 
[2024-07-14 01:52:48.978979] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.414 [2024-07-14 01:52:48.979008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.414 Passthru0 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 01:52:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.414 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.414 { 00:05:43.414 "name": "Malloc0", 00:05:43.414 "aliases": [ 00:05:43.414 "19274169-f45a-44a9-abe9-5d0f96d81f67" 00:05:43.414 ], 00:05:43.414 "product_name": "Malloc disk", 00:05:43.414 "block_size": 512, 00:05:43.414 "num_blocks": 16384, 00:05:43.414 "uuid": "19274169-f45a-44a9-abe9-5d0f96d81f67", 00:05:43.414 "assigned_rate_limits": { 00:05:43.414 "rw_ios_per_sec": 0, 00:05:43.414 "rw_mbytes_per_sec": 0, 00:05:43.414 "r_mbytes_per_sec": 0, 00:05:43.414 "w_mbytes_per_sec": 0 00:05:43.414 }, 00:05:43.414 "claimed": true, 00:05:43.414 "claim_type": "exclusive_write", 00:05:43.414 "zoned": false, 00:05:43.414 "supported_io_types": { 00:05:43.414 "read": true, 00:05:43.414 "write": true, 00:05:43.414 "unmap": true, 00:05:43.414 "flush": true, 00:05:43.414 "reset": true, 00:05:43.414 "nvme_admin": false, 00:05:43.414 "nvme_io": false, 00:05:43.414 "nvme_io_md": false, 00:05:43.414 "write_zeroes": true, 00:05:43.414 "zcopy": true, 00:05:43.414 "get_zone_info": false, 00:05:43.414 "zone_management": false, 00:05:43.414 "zone_append": false, 00:05:43.414 "compare": false, 00:05:43.414 "compare_and_write": false, 00:05:43.414 "abort": true, 00:05:43.414 "seek_hole": false, 00:05:43.414 "seek_data": false, 00:05:43.414 "copy": true, 00:05:43.414 "nvme_iov_md": false 00:05:43.414 }, 00:05:43.414 "memory_domains": [ 00:05:43.414 { 00:05:43.414 "dma_device_id": "system", 00:05:43.414 "dma_device_type": 1 00:05:43.414 }, 00:05:43.414 { 00:05:43.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.414 "dma_device_type": 2 00:05:43.414 } 00:05:43.414 ], 00:05:43.414 "driver_specific": {} 00:05:43.414 }, 00:05:43.414 { 00:05:43.415 "name": "Passthru0", 00:05:43.415 "aliases": [ 00:05:43.415 "d0798717-eb24-59c5-b47b-c9a33b9b6630" 00:05:43.415 ], 00:05:43.415 "product_name": "passthru", 00:05:43.415 "block_size": 512, 00:05:43.415 "num_blocks": 16384, 00:05:43.415 "uuid": "d0798717-eb24-59c5-b47b-c9a33b9b6630", 00:05:43.415 "assigned_rate_limits": { 00:05:43.415 "rw_ios_per_sec": 0, 00:05:43.415 "rw_mbytes_per_sec": 0, 00:05:43.415 "r_mbytes_per_sec": 0, 00:05:43.415 "w_mbytes_per_sec": 0 00:05:43.415 }, 00:05:43.415 "claimed": false, 00:05:43.415 "zoned": false, 00:05:43.415 "supported_io_types": { 00:05:43.415 "read": true, 00:05:43.415 "write": true, 00:05:43.415 "unmap": true, 00:05:43.415 "flush": true, 00:05:43.415 "reset": true, 00:05:43.415 "nvme_admin": false, 00:05:43.415 "nvme_io": false, 00:05:43.415 "nvme_io_md": false, 00:05:43.415 "write_zeroes": true, 00:05:43.415 "zcopy": true, 00:05:43.415 "get_zone_info": false, 00:05:43.415 "zone_management": false, 00:05:43.415 "zone_append": false, 00:05:43.415 "compare": false, 00:05:43.415 "compare_and_write": false, 00:05:43.415 "abort": true, 00:05:43.415 "seek_hole": false, 
00:05:43.415 "seek_data": false, 00:05:43.415 "copy": true, 00:05:43.415 "nvme_iov_md": false 00:05:43.415 }, 00:05:43.415 "memory_domains": [ 00:05:43.415 { 00:05:43.415 "dma_device_id": "system", 00:05:43.415 "dma_device_type": 1 00:05:43.415 }, 00:05:43.415 { 00:05:43.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.415 "dma_device_type": 2 00:05:43.415 } 00:05:43.415 ], 00:05:43.415 "driver_specific": { 00:05:43.415 "passthru": { 00:05:43.415 "name": "Passthru0", 00:05:43.415 "base_bdev_name": "Malloc0" 00:05:43.415 } 00:05:43.415 } 00:05:43.415 } 00:05:43.415 ]' 00:05:43.415 01:52:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:43.415 01:52:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.415 01:52:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.415 01:52:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.415 01:52:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.415 01:52:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.415 01:52:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.415 01:52:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.415 00:05:43.415 real 0m0.224s 00:05:43.415 user 0m0.144s 00:05:43.415 sys 0m0.024s 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.415 01:52:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.415 ************************************ 00:05:43.415 END TEST rpc_integrity 00:05:43.415 ************************************ 00:05:43.685 01:52:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.685 01:52:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:43.685 01:52:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.685 01:52:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.685 01:52:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.685 ************************************ 00:05:43.685 START TEST rpc_plugins 00:05:43.685 ************************************ 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:43.685 { 00:05:43.685 "name": "Malloc1", 00:05:43.685 "aliases": [ 00:05:43.685 "3bc06db2-a22c-45cb-b95a-6848603b8a96" 00:05:43.685 ], 00:05:43.685 "product_name": "Malloc disk", 00:05:43.685 "block_size": 4096, 00:05:43.685 "num_blocks": 256, 00:05:43.685 "uuid": "3bc06db2-a22c-45cb-b95a-6848603b8a96", 00:05:43.685 "assigned_rate_limits": { 00:05:43.685 "rw_ios_per_sec": 0, 00:05:43.685 "rw_mbytes_per_sec": 0, 00:05:43.685 "r_mbytes_per_sec": 0, 00:05:43.685 "w_mbytes_per_sec": 0 00:05:43.685 }, 00:05:43.685 "claimed": false, 00:05:43.685 "zoned": false, 00:05:43.685 "supported_io_types": { 00:05:43.685 "read": true, 00:05:43.685 "write": true, 00:05:43.685 "unmap": true, 00:05:43.685 "flush": true, 00:05:43.685 "reset": true, 00:05:43.685 "nvme_admin": false, 00:05:43.685 "nvme_io": false, 00:05:43.685 "nvme_io_md": false, 00:05:43.685 "write_zeroes": true, 00:05:43.685 "zcopy": true, 00:05:43.685 "get_zone_info": false, 00:05:43.685 "zone_management": false, 00:05:43.685 "zone_append": false, 00:05:43.685 "compare": false, 00:05:43.685 "compare_and_write": false, 00:05:43.685 "abort": true, 00:05:43.685 "seek_hole": false, 00:05:43.685 "seek_data": false, 00:05:43.685 "copy": true, 00:05:43.685 "nvme_iov_md": false 00:05:43.685 }, 00:05:43.685 "memory_domains": [ 00:05:43.685 { 00:05:43.685 "dma_device_id": "system", 00:05:43.685 "dma_device_type": 1 00:05:43.685 }, 00:05:43.685 { 00:05:43.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.685 "dma_device_type": 2 00:05:43.685 } 00:05:43.685 ], 00:05:43.685 "driver_specific": {} 00:05:43.685 } 00:05:43.685 ]' 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:43.685 01:52:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:43.685 00:05:43.685 real 0m0.110s 00:05:43.685 user 0m0.074s 00:05:43.685 sys 0m0.010s 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.685 01:52:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.685 ************************************ 00:05:43.685 END TEST rpc_plugins 00:05:43.685 ************************************ 00:05:43.685 01:52:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.685 01:52:49 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:43.685 01:52:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.685 01:52:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.685 01:52:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.685 ************************************ 00:05:43.685 START TEST rpc_trace_cmd_test 00:05:43.685 ************************************ 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:43.685 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1453478", 00:05:43.685 "tpoint_group_mask": "0x8", 00:05:43.685 "iscsi_conn": { 00:05:43.685 "mask": "0x2", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "scsi": { 00:05:43.685 "mask": "0x4", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "bdev": { 00:05:43.685 "mask": "0x8", 00:05:43.685 "tpoint_mask": "0xffffffffffffffff" 00:05:43.685 }, 00:05:43.685 "nvmf_rdma": { 00:05:43.685 "mask": "0x10", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "nvmf_tcp": { 00:05:43.685 "mask": "0x20", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "ftl": { 00:05:43.685 "mask": "0x40", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "blobfs": { 00:05:43.685 "mask": "0x80", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "dsa": { 00:05:43.685 "mask": "0x200", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "thread": { 00:05:43.685 "mask": "0x400", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "nvme_pcie": { 00:05:43.685 "mask": "0x800", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "iaa": { 00:05:43.685 "mask": "0x1000", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "nvme_tcp": { 00:05:43.685 "mask": "0x2000", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "bdev_nvme": { 00:05:43.685 "mask": "0x4000", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 }, 00:05:43.685 "sock": { 00:05:43.685 "mask": "0x8000", 00:05:43.685 "tpoint_mask": "0x0" 00:05:43.685 } 00:05:43.685 }' 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:43.685 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:43.968 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:43.968 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:43.968 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:43.968 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:43.968 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:43.968 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:43.968 01:52:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
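rpc_integrity, rpc_plugins and rpc_trace_cmd_test drive spdk_tgt purely over JSON-RPC, so the same calls can be replayed by hand against a running target. A rough sketch from the spdk checkout used here; the RPC names are the ones that appear in this log, except rpc_get_methods, which is used only as a readiness probe and is an assumption rather than something this run exercises:

  ./build/bin/spdk_tgt -e bdev &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  ./scripts/rpc.py bdev_malloc_create 8 512 -b Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # 2, as rpc_integrity asserts
  ./scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask     # 0x8 when started with -e bdev
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  kill %1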
00:05:43.968 00:05:43.968 real 0m0.198s 00:05:43.968 user 0m0.172s 00:05:43.968 sys 0m0.018s 00:05:43.968 01:52:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.968 01:52:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.968 ************************************ 00:05:43.968 END TEST rpc_trace_cmd_test 00:05:43.968 ************************************ 00:05:43.968 01:52:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.968 01:52:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:43.968 01:52:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:43.968 01:52:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:43.968 01:52:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.968 01:52:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.968 01:52:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.968 ************************************ 00:05:43.968 START TEST rpc_daemon_integrity 00:05:43.968 ************************************ 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.968 { 00:05:43.968 "name": "Malloc2", 00:05:43.968 "aliases": [ 00:05:43.968 "4b7e48fc-2096-4584-8fb0-22d1198ca89b" 00:05:43.968 ], 00:05:43.968 "product_name": "Malloc disk", 00:05:43.968 "block_size": 512, 00:05:43.968 "num_blocks": 16384, 00:05:43.968 "uuid": "4b7e48fc-2096-4584-8fb0-22d1198ca89b", 00:05:43.968 "assigned_rate_limits": { 00:05:43.968 "rw_ios_per_sec": 0, 00:05:43.968 "rw_mbytes_per_sec": 0, 00:05:43.968 "r_mbytes_per_sec": 0, 00:05:43.968 "w_mbytes_per_sec": 0 00:05:43.968 }, 00:05:43.968 "claimed": false, 00:05:43.968 "zoned": false, 00:05:43.968 "supported_io_types": { 00:05:43.968 "read": true, 00:05:43.968 "write": true, 00:05:43.968 "unmap": true, 00:05:43.968 "flush": true, 00:05:43.968 "reset": true, 00:05:43.968 "nvme_admin": false, 00:05:43.968 "nvme_io": false, 
00:05:43.968 "nvme_io_md": false, 00:05:43.968 "write_zeroes": true, 00:05:43.968 "zcopy": true, 00:05:43.968 "get_zone_info": false, 00:05:43.968 "zone_management": false, 00:05:43.968 "zone_append": false, 00:05:43.968 "compare": false, 00:05:43.968 "compare_and_write": false, 00:05:43.968 "abort": true, 00:05:43.968 "seek_hole": false, 00:05:43.968 "seek_data": false, 00:05:43.968 "copy": true, 00:05:43.968 "nvme_iov_md": false 00:05:43.968 }, 00:05:43.968 "memory_domains": [ 00:05:43.968 { 00:05:43.968 "dma_device_id": "system", 00:05:43.968 "dma_device_type": 1 00:05:43.968 }, 00:05:43.968 { 00:05:43.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.968 "dma_device_type": 2 00:05:43.968 } 00:05:43.968 ], 00:05:43.968 "driver_specific": {} 00:05:43.968 } 00:05:43.968 ]' 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.968 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.968 [2024-07-14 01:52:49.655982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:43.968 [2024-07-14 01:52:49.656023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.968 [2024-07-14 01:52:49.656050] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x107a5b0 00:05:43.968 [2024-07-14 01:52:49.656065] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.968 [2024-07-14 01:52:49.657444] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.968 [2024-07-14 01:52:49.657475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.968 Passthru0 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.226 { 00:05:44.226 "name": "Malloc2", 00:05:44.226 "aliases": [ 00:05:44.226 "4b7e48fc-2096-4584-8fb0-22d1198ca89b" 00:05:44.226 ], 00:05:44.226 "product_name": "Malloc disk", 00:05:44.226 "block_size": 512, 00:05:44.226 "num_blocks": 16384, 00:05:44.226 "uuid": "4b7e48fc-2096-4584-8fb0-22d1198ca89b", 00:05:44.226 "assigned_rate_limits": { 00:05:44.226 "rw_ios_per_sec": 0, 00:05:44.226 "rw_mbytes_per_sec": 0, 00:05:44.226 "r_mbytes_per_sec": 0, 00:05:44.226 "w_mbytes_per_sec": 0 00:05:44.226 }, 00:05:44.226 "claimed": true, 00:05:44.226 "claim_type": "exclusive_write", 00:05:44.226 "zoned": false, 00:05:44.226 "supported_io_types": { 00:05:44.226 "read": true, 00:05:44.226 "write": true, 00:05:44.226 "unmap": true, 00:05:44.226 "flush": true, 00:05:44.226 "reset": true, 00:05:44.226 "nvme_admin": false, 00:05:44.226 "nvme_io": false, 00:05:44.226 "nvme_io_md": false, 00:05:44.226 "write_zeroes": true, 00:05:44.226 "zcopy": true, 00:05:44.226 "get_zone_info": 
false, 00:05:44.226 "zone_management": false, 00:05:44.226 "zone_append": false, 00:05:44.226 "compare": false, 00:05:44.226 "compare_and_write": false, 00:05:44.226 "abort": true, 00:05:44.226 "seek_hole": false, 00:05:44.226 "seek_data": false, 00:05:44.226 "copy": true, 00:05:44.226 "nvme_iov_md": false 00:05:44.226 }, 00:05:44.226 "memory_domains": [ 00:05:44.226 { 00:05:44.226 "dma_device_id": "system", 00:05:44.226 "dma_device_type": 1 00:05:44.226 }, 00:05:44.226 { 00:05:44.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.226 "dma_device_type": 2 00:05:44.226 } 00:05:44.226 ], 00:05:44.226 "driver_specific": {} 00:05:44.226 }, 00:05:44.226 { 00:05:44.226 "name": "Passthru0", 00:05:44.226 "aliases": [ 00:05:44.226 "255fad3a-0e6c-5125-85e7-618e7345676b" 00:05:44.226 ], 00:05:44.226 "product_name": "passthru", 00:05:44.226 "block_size": 512, 00:05:44.226 "num_blocks": 16384, 00:05:44.226 "uuid": "255fad3a-0e6c-5125-85e7-618e7345676b", 00:05:44.226 "assigned_rate_limits": { 00:05:44.226 "rw_ios_per_sec": 0, 00:05:44.226 "rw_mbytes_per_sec": 0, 00:05:44.226 "r_mbytes_per_sec": 0, 00:05:44.226 "w_mbytes_per_sec": 0 00:05:44.226 }, 00:05:44.226 "claimed": false, 00:05:44.226 "zoned": false, 00:05:44.226 "supported_io_types": { 00:05:44.226 "read": true, 00:05:44.226 "write": true, 00:05:44.226 "unmap": true, 00:05:44.226 "flush": true, 00:05:44.226 "reset": true, 00:05:44.226 "nvme_admin": false, 00:05:44.226 "nvme_io": false, 00:05:44.226 "nvme_io_md": false, 00:05:44.226 "write_zeroes": true, 00:05:44.226 "zcopy": true, 00:05:44.226 "get_zone_info": false, 00:05:44.226 "zone_management": false, 00:05:44.226 "zone_append": false, 00:05:44.226 "compare": false, 00:05:44.226 "compare_and_write": false, 00:05:44.226 "abort": true, 00:05:44.226 "seek_hole": false, 00:05:44.226 "seek_data": false, 00:05:44.226 "copy": true, 00:05:44.226 "nvme_iov_md": false 00:05:44.226 }, 00:05:44.226 "memory_domains": [ 00:05:44.226 { 00:05:44.226 "dma_device_id": "system", 00:05:44.226 "dma_device_type": 1 00:05:44.226 }, 00:05:44.226 { 00:05:44.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.226 "dma_device_type": 2 00:05:44.226 } 00:05:44.226 ], 00:05:44.226 "driver_specific": { 00:05:44.226 "passthru": { 00:05:44.226 "name": "Passthru0", 00:05:44.226 "base_bdev_name": "Malloc2" 00:05:44.226 } 00:05:44.226 } 00:05:44.226 } 00:05:44.226 ]' 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.226 01:52:49 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.226 00:05:44.226 real 0m0.226s 00:05:44.226 user 0m0.140s 00:05:44.226 sys 0m0.030s 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.226 01:52:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.226 ************************************ 00:05:44.226 END TEST rpc_daemon_integrity 00:05:44.226 ************************************ 00:05:44.226 01:52:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:44.226 01:52:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:44.226 01:52:49 rpc -- rpc/rpc.sh@84 -- # killprocess 1453478 00:05:44.226 01:52:49 rpc -- common/autotest_common.sh@948 -- # '[' -z 1453478 ']' 00:05:44.226 01:52:49 rpc -- common/autotest_common.sh@952 -- # kill -0 1453478 00:05:44.226 01:52:49 rpc -- common/autotest_common.sh@953 -- # uname 00:05:44.226 01:52:49 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.226 01:52:49 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1453478 00:05:44.226 01:52:49 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.226 01:52:49 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.227 01:52:49 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1453478' 00:05:44.227 killing process with pid 1453478 00:05:44.227 01:52:49 rpc -- common/autotest_common.sh@967 -- # kill 1453478 00:05:44.227 01:52:49 rpc -- common/autotest_common.sh@972 -- # wait 1453478 00:05:44.792 00:05:44.792 real 0m1.905s 00:05:44.792 user 0m2.345s 00:05:44.792 sys 0m0.640s 00:05:44.792 01:52:50 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.792 01:52:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.792 ************************************ 00:05:44.792 END TEST rpc 00:05:44.792 ************************************ 00:05:44.792 01:52:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.792 01:52:50 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:44.792 01:52:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.792 01:52:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.792 01:52:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.792 ************************************ 00:05:44.792 START TEST skip_rpc 00:05:44.792 ************************************ 00:05:44.792 01:52:50 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:44.792 * Looking for test storage... 
00:05:44.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.792 01:52:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:44.792 01:52:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:44.792 01:52:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:44.792 01:52:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.792 01:52:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.792 01:52:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.792 ************************************ 00:05:44.792 START TEST skip_rpc 00:05:44.792 ************************************ 00:05:44.792 01:52:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:44.792 01:52:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1453897 00:05:44.792 01:52:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:44.792 01:52:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.792 01:52:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:44.792 [2024-07-14 01:52:50.417757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:44.792 [2024-07-14 01:52:50.417835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1453897 ] 00:05:44.792 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.792 [2024-07-14 01:52:50.480020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.050 [2024-07-14 01:52:50.569602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1453897 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1453897 ']' 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1453897 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1453897 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1453897' 00:05:50.310 killing process with pid 1453897 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1453897 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1453897 00:05:50.310 00:05:50.310 real 0m5.458s 00:05:50.310 user 0m5.150s 00:05:50.310 sys 0m0.311s 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.310 01:52:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.310 ************************************ 00:05:50.310 END TEST skip_rpc 00:05:50.310 ************************************ 00:05:50.310 01:52:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.310 01:52:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:50.310 01:52:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.310 01:52:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.310 01:52:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.310 ************************************ 00:05:50.310 START TEST skip_rpc_with_json 00:05:50.310 ************************************ 00:05:50.310 01:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:50.310 01:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1454588 00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1454588 00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1454588 ']' 00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
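The skip_rpc case that finished just above amounts to starting the target with its RPC server disabled and confirming that an RPC call then fails. A rough sketch under the same paths, with scripts/rpc.py standing in for rpc_cmd (an outline, not the test script itself):
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt=$!
  sleep 5                                   # the test sleeps rather than waiting for a listener
  if scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC answered although --no-rpc-server was given"
  else
      echo "expected failure: no RPC server is listening on /var/tmp/spdk.sock"
  fi
  kill $tgt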
00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.311 01:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.311 [2024-07-14 01:52:55.926084] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:50.311 [2024-07-14 01:52:55.926178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454588 ] 00:05:50.311 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.311 [2024-07-14 01:52:55.990374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.569 [2024-07-14 01:52:56.077779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.828 [2024-07-14 01:52:56.334470] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:50.828 request: 00:05:50.828 { 00:05:50.828 "trtype": "tcp", 00:05:50.828 "method": "nvmf_get_transports", 00:05:50.828 "req_id": 1 00:05:50.828 } 00:05:50.828 Got JSON-RPC error response 00:05:50.828 response: 00:05:50.828 { 00:05:50.828 "code": -19, 00:05:50.828 "message": "No such device" 00:05:50.828 } 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.828 [2024-07-14 01:52:56.342603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.828 01:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.828 { 00:05:50.828 "subsystems": [ 00:05:50.828 { 00:05:50.828 "subsystem": "vfio_user_target", 00:05:50.828 "config": null 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "subsystem": "keyring", 00:05:50.828 "config": [] 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "subsystem": "iobuf", 00:05:50.828 "config": [ 00:05:50.828 { 00:05:50.828 "method": "iobuf_set_options", 00:05:50.828 "params": { 00:05:50.828 "small_pool_count": 8192, 00:05:50.828 "large_pool_count": 1024, 00:05:50.828 "small_bufsize": 8192, 00:05:50.828 "large_bufsize": 
135168 00:05:50.828 } 00:05:50.828 } 00:05:50.828 ] 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "subsystem": "sock", 00:05:50.828 "config": [ 00:05:50.828 { 00:05:50.828 "method": "sock_set_default_impl", 00:05:50.828 "params": { 00:05:50.828 "impl_name": "posix" 00:05:50.828 } 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "method": "sock_impl_set_options", 00:05:50.828 "params": { 00:05:50.828 "impl_name": "ssl", 00:05:50.828 "recv_buf_size": 4096, 00:05:50.828 "send_buf_size": 4096, 00:05:50.828 "enable_recv_pipe": true, 00:05:50.828 "enable_quickack": false, 00:05:50.828 "enable_placement_id": 0, 00:05:50.828 "enable_zerocopy_send_server": true, 00:05:50.828 "enable_zerocopy_send_client": false, 00:05:50.828 "zerocopy_threshold": 0, 00:05:50.828 "tls_version": 0, 00:05:50.828 "enable_ktls": false 00:05:50.828 } 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "method": "sock_impl_set_options", 00:05:50.828 "params": { 00:05:50.828 "impl_name": "posix", 00:05:50.828 "recv_buf_size": 2097152, 00:05:50.828 "send_buf_size": 2097152, 00:05:50.828 "enable_recv_pipe": true, 00:05:50.828 "enable_quickack": false, 00:05:50.828 "enable_placement_id": 0, 00:05:50.828 "enable_zerocopy_send_server": true, 00:05:50.828 "enable_zerocopy_send_client": false, 00:05:50.828 "zerocopy_threshold": 0, 00:05:50.828 "tls_version": 0, 00:05:50.828 "enable_ktls": false 00:05:50.828 } 00:05:50.828 } 00:05:50.828 ] 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "subsystem": "vmd", 00:05:50.828 "config": [] 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "subsystem": "accel", 00:05:50.828 "config": [ 00:05:50.828 { 00:05:50.828 "method": "accel_set_options", 00:05:50.828 "params": { 00:05:50.828 "small_cache_size": 128, 00:05:50.828 "large_cache_size": 16, 00:05:50.828 "task_count": 2048, 00:05:50.828 "sequence_count": 2048, 00:05:50.828 "buf_count": 2048 00:05:50.828 } 00:05:50.828 } 00:05:50.828 ] 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "subsystem": "bdev", 00:05:50.828 "config": [ 00:05:50.828 { 00:05:50.828 "method": "bdev_set_options", 00:05:50.828 "params": { 00:05:50.828 "bdev_io_pool_size": 65535, 00:05:50.828 "bdev_io_cache_size": 256, 00:05:50.828 "bdev_auto_examine": true, 00:05:50.828 "iobuf_small_cache_size": 128, 00:05:50.828 "iobuf_large_cache_size": 16 00:05:50.828 } 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "method": "bdev_raid_set_options", 00:05:50.828 "params": { 00:05:50.828 "process_window_size_kb": 1024 00:05:50.828 } 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "method": "bdev_iscsi_set_options", 00:05:50.828 "params": { 00:05:50.828 "timeout_sec": 30 00:05:50.828 } 00:05:50.828 }, 00:05:50.828 { 00:05:50.828 "method": "bdev_nvme_set_options", 00:05:50.828 "params": { 00:05:50.828 "action_on_timeout": "none", 00:05:50.828 "timeout_us": 0, 00:05:50.828 "timeout_admin_us": 0, 00:05:50.828 "keep_alive_timeout_ms": 10000, 00:05:50.828 "arbitration_burst": 0, 00:05:50.828 "low_priority_weight": 0, 00:05:50.828 "medium_priority_weight": 0, 00:05:50.828 "high_priority_weight": 0, 00:05:50.828 "nvme_adminq_poll_period_us": 10000, 00:05:50.828 "nvme_ioq_poll_period_us": 0, 00:05:50.828 "io_queue_requests": 0, 00:05:50.828 "delay_cmd_submit": true, 00:05:50.828 "transport_retry_count": 4, 00:05:50.828 "bdev_retry_count": 3, 00:05:50.828 "transport_ack_timeout": 0, 00:05:50.828 "ctrlr_loss_timeout_sec": 0, 00:05:50.828 "reconnect_delay_sec": 0, 00:05:50.828 "fast_io_fail_timeout_sec": 0, 00:05:50.828 "disable_auto_failback": false, 00:05:50.829 "generate_uuids": false, 00:05:50.829 "transport_tos": 0, 
00:05:50.829 "nvme_error_stat": false, 00:05:50.829 "rdma_srq_size": 0, 00:05:50.829 "io_path_stat": false, 00:05:50.829 "allow_accel_sequence": false, 00:05:50.829 "rdma_max_cq_size": 0, 00:05:50.829 "rdma_cm_event_timeout_ms": 0, 00:05:50.829 "dhchap_digests": [ 00:05:50.829 "sha256", 00:05:50.829 "sha384", 00:05:50.829 "sha512" 00:05:50.829 ], 00:05:50.829 "dhchap_dhgroups": [ 00:05:50.829 "null", 00:05:50.829 "ffdhe2048", 00:05:50.829 "ffdhe3072", 00:05:50.829 "ffdhe4096", 00:05:50.829 "ffdhe6144", 00:05:50.829 "ffdhe8192" 00:05:50.829 ] 00:05:50.829 } 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "method": "bdev_nvme_set_hotplug", 00:05:50.829 "params": { 00:05:50.829 "period_us": 100000, 00:05:50.829 "enable": false 00:05:50.829 } 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "method": "bdev_wait_for_examine" 00:05:50.829 } 00:05:50.829 ] 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "subsystem": "scsi", 00:05:50.829 "config": null 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "subsystem": "scheduler", 00:05:50.829 "config": [ 00:05:50.829 { 00:05:50.829 "method": "framework_set_scheduler", 00:05:50.829 "params": { 00:05:50.829 "name": "static" 00:05:50.829 } 00:05:50.829 } 00:05:50.829 ] 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "subsystem": "vhost_scsi", 00:05:50.829 "config": [] 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "subsystem": "vhost_blk", 00:05:50.829 "config": [] 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "subsystem": "ublk", 00:05:50.829 "config": [] 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "subsystem": "nbd", 00:05:50.829 "config": [] 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "subsystem": "nvmf", 00:05:50.829 "config": [ 00:05:50.829 { 00:05:50.829 "method": "nvmf_set_config", 00:05:50.829 "params": { 00:05:50.829 "discovery_filter": "match_any", 00:05:50.829 "admin_cmd_passthru": { 00:05:50.829 "identify_ctrlr": false 00:05:50.829 } 00:05:50.829 } 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "method": "nvmf_set_max_subsystems", 00:05:50.829 "params": { 00:05:50.829 "max_subsystems": 1024 00:05:50.829 } 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "method": "nvmf_set_crdt", 00:05:50.829 "params": { 00:05:50.829 "crdt1": 0, 00:05:50.829 "crdt2": 0, 00:05:50.829 "crdt3": 0 00:05:50.829 } 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "method": "nvmf_create_transport", 00:05:50.829 "params": { 00:05:50.829 "trtype": "TCP", 00:05:50.829 "max_queue_depth": 128, 00:05:50.829 "max_io_qpairs_per_ctrlr": 127, 00:05:50.829 "in_capsule_data_size": 4096, 00:05:50.829 "max_io_size": 131072, 00:05:50.829 "io_unit_size": 131072, 00:05:50.829 "max_aq_depth": 128, 00:05:50.829 "num_shared_buffers": 511, 00:05:50.829 "buf_cache_size": 4294967295, 00:05:50.829 "dif_insert_or_strip": false, 00:05:50.829 "zcopy": false, 00:05:50.829 "c2h_success": true, 00:05:50.829 "sock_priority": 0, 00:05:50.829 "abort_timeout_sec": 1, 00:05:50.829 "ack_timeout": 0, 00:05:50.829 "data_wr_pool_size": 0 00:05:50.829 } 00:05:50.829 } 00:05:50.829 ] 00:05:50.829 }, 00:05:50.829 { 00:05:50.829 "subsystem": "iscsi", 00:05:50.829 "config": [ 00:05:50.829 { 00:05:50.829 "method": "iscsi_set_options", 00:05:50.829 "params": { 00:05:50.829 "node_base": "iqn.2016-06.io.spdk", 00:05:50.829 "max_sessions": 128, 00:05:50.829 "max_connections_per_session": 2, 00:05:50.829 "max_queue_depth": 64, 00:05:50.829 "default_time2wait": 2, 00:05:50.829 "default_time2retain": 20, 00:05:50.829 "first_burst_length": 8192, 00:05:50.829 "immediate_data": true, 00:05:50.829 "allow_duplicated_isid": false, 00:05:50.829 
"error_recovery_level": 0, 00:05:50.829 "nop_timeout": 60, 00:05:50.829 "nop_in_interval": 30, 00:05:50.829 "disable_chap": false, 00:05:50.829 "require_chap": false, 00:05:50.829 "mutual_chap": false, 00:05:50.829 "chap_group": 0, 00:05:50.829 "max_large_datain_per_connection": 64, 00:05:50.829 "max_r2t_per_connection": 4, 00:05:50.829 "pdu_pool_size": 36864, 00:05:50.829 "immediate_data_pool_size": 16384, 00:05:50.829 "data_out_pool_size": 2048 00:05:50.829 } 00:05:50.829 } 00:05:50.829 ] 00:05:50.829 } 00:05:50.829 ] 00:05:50.829 } 00:05:50.829 01:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:50.829 01:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1454588 00:05:50.829 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1454588 ']' 00:05:50.829 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1454588 00:05:50.829 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:50.829 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.829 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1454588 00:05:51.088 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.088 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.088 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1454588' 00:05:51.088 killing process with pid 1454588 00:05:51.088 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1454588 00:05:51.088 01:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1454588 00:05:51.345 01:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1454730 00:05:51.345 01:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:51.345 01:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1454730 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1454730 ']' 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1454730 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1454730 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1454730' 00:05:56.607 killing process with pid 1454730 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1454730 00:05:56.607 01:53:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1454730 
00:05:56.866 01:53:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.866 01:53:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.866 00:05:56.866 real 0m6.518s 00:05:56.866 user 0m6.079s 00:05:56.866 sys 0m0.704s 00:05:56.866 01:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.866 01:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.866 ************************************ 00:05:56.866 END TEST skip_rpc_with_json 00:05:56.866 ************************************ 00:05:56.866 01:53:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.866 01:53:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:56.866 01:53:02 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.866 01:53:02 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.866 01:53:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.866 ************************************ 00:05:56.866 START TEST skip_rpc_with_delay 00:05:56.866 ************************************ 00:05:56.866 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:56.866 01:53:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.866 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:56.866 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.867 [2024-07-14 01:53:02.491238] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
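skip_rpc_with_delay asserts exactly the rejection printed above: --wait-for-rpc defers subsystem initialization until the target is told to proceed over RPC, so it cannot be combined with --no-rpc-server. For contrast, the supported pairing looks roughly like this (framework_start_init is the usual companion RPC; it is not exercised in this run):
  build/bin/spdk_tgt --wait-for-rpc -m 0x1 &    # RPC server up, subsystem init deferred
  sleep 5
  scripts/rpc.py framework_start_init           # subsystems initialize only now
  scripts/rpc.py spdk_get_version               # target is fully up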
00:05:56.867 [2024-07-14 01:53:02.491343] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.867 00:05:56.867 real 0m0.068s 00:05:56.867 user 0m0.045s 00:05:56.867 sys 0m0.023s 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.867 01:53:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:56.867 ************************************ 00:05:56.867 END TEST skip_rpc_with_delay 00:05:56.867 ************************************ 00:05:56.867 01:53:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.867 01:53:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:56.867 01:53:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:56.867 01:53:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:56.867 01:53:02 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.867 01:53:02 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.867 01:53:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.867 ************************************ 00:05:56.867 START TEST exit_on_failed_rpc_init 00:05:56.867 ************************************ 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1455448 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1455448 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1455448 ']' 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.867 01:53:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.125 [2024-07-14 01:53:02.606320] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:57.125 [2024-07-14 01:53:02.606387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455448 ] 00:05:57.125 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.125 [2024-07-14 01:53:02.663996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.125 [2024-07-14 01:53:02.753020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:57.383 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.383 [2024-07-14 01:53:03.062945] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:57.383 [2024-07-14 01:53:03.063022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455463 ] 00:05:57.641 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.641 [2024-07-14 01:53:03.125821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.641 [2024-07-14 01:53:03.219527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.641 [2024-07-14 01:53:03.219659] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:57.641 [2024-07-14 01:53:03.219691] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.641 [2024-07-14 01:53:03.219705] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1455448 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1455448 ']' 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1455448 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.641 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1455448 00:05:57.898 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.898 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.898 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1455448' 00:05:57.898 killing process with pid 1455448 00:05:57.898 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1455448 00:05:57.898 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1455448 00:05:58.156 00:05:58.156 real 0m1.191s 00:05:58.156 user 0m1.281s 00:05:58.156 sys 0m0.450s 00:05:58.156 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.156 01:53:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:58.156 ************************************ 00:05:58.156 END TEST exit_on_failed_rpc_init 00:05:58.156 ************************************ 00:05:58.156 01:53:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:58.156 01:53:03 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:58.156 00:05:58.156 real 0m13.477s 00:05:58.156 user 0m12.662s 00:05:58.156 sys 0m1.640s 00:05:58.156 01:53:03 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.156 01:53:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.156 ************************************ 00:05:58.156 END TEST skip_rpc 00:05:58.156 ************************************ 00:05:58.156 01:53:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.156 01:53:03 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:58.156 01:53:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.156 01:53:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.156 01:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.156 ************************************ 00:05:58.156 START TEST rpc_client 00:05:58.156 ************************************ 00:05:58.156 01:53:03 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:58.415 * Looking for test storage... 00:05:58.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:58.415 01:53:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:58.415 OK 00:05:58.415 01:53:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:58.415 00:05:58.415 real 0m0.069s 00:05:58.415 user 0m0.029s 00:05:58.415 sys 0m0.043s 00:05:58.415 01:53:03 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.415 01:53:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:58.415 ************************************ 00:05:58.415 END TEST rpc_client 00:05:58.415 ************************************ 00:05:58.415 01:53:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.415 01:53:03 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:58.415 01:53:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.415 01:53:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.415 01:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.415 ************************************ 00:05:58.415 START TEST json_config 00:05:58.415 ************************************ 00:05:58.415 01:53:03 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.415 
01:53:03 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.415 01:53:03 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.415 01:53:03 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.415 01:53:03 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.415 01:53:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.415 01:53:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.415 01:53:03 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.415 01:53:03 json_config -- paths/export.sh@5 -- # export PATH 00:05:58.415 01:53:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@47 -- # : 0 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.415 01:53:03 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.415 01:53:03 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:58.415 INFO: JSON configuration test init 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:58.415 01:53:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.415 01:53:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:58.415 01:53:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.415 01:53:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.415 01:53:03 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:58.415 01:53:03 json_config -- json_config/common.sh@9 -- # local app=target 00:05:58.415 01:53:03 json_config -- json_config/common.sh@10 -- # shift 00:05:58.415 01:53:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.415 01:53:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.415 01:53:03 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.415 01:53:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.415 01:53:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.415 01:53:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1455701 00:05:58.416 01:53:03 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:58.416 01:53:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.416 Waiting for target to run... 00:05:58.416 01:53:03 json_config -- json_config/common.sh@25 -- # waitforlisten 1455701 /var/tmp/spdk_tgt.sock 00:05:58.416 01:53:03 json_config -- common/autotest_common.sh@829 -- # '[' -z 1455701 ']' 00:05:58.416 01:53:03 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.416 01:53:03 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.416 01:53:03 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.416 01:53:03 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.416 01:53:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.416 [2024-07-14 01:53:04.040836] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:58.416 [2024-07-14 01:53:04.040950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455701 ] 00:05:58.416 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.981 [2024-07-14 01:53:04.547772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.981 [2024-07-14 01:53:04.629420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.545 01:53:05 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.545 01:53:05 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:59.545 01:53:05 json_config -- json_config/common.sh@26 -- # echo '' 00:05:59.545 00:05:59.545 01:53:05 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:59.545 01:53:05 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:59.545 01:53:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.545 01:53:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.545 01:53:05 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:59.545 01:53:05 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:59.545 01:53:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.545 01:53:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.545 01:53:05 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:59.545 01:53:05 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:59.545 01:53:05 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:02.823 01:53:08 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:02.823 01:53:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:02.823 01:53:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.823 01:53:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:02.824 01:53:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:02.824 01:53:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.824 01:53:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:02.824 01:53:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.824 01:53:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:02.824 01:53:08 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:02.824 01:53:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.082 MallocForNvmf0 00:06:03.082 01:53:08 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.082 01:53:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.345 MallocForNvmf1 00:06:03.345 01:53:08 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.345 01:53:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.654 [2024-07-14 01:53:09.177976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.654 01:53:09 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:03.654 01:53:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:03.912 01:53:09 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:03.912 01:53:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.170 01:53:09 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.170 01:53:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.427 01:53:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:04.427 01:53:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:04.685 [2024-07-14 01:53:10.169295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:04.685 01:53:10 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:04.685 01:53:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.685 01:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.685 01:53:10 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:04.685 01:53:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.685 01:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.685 01:53:10 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:04.685 01:53:10 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.685 01:53:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.942 MallocBdevForConfigChangeCheck 00:06:04.942 01:53:10 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:04.942 01:53:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.942 01:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.942 01:53:10 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:04.942 01:53:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.200 01:53:10 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:05.200 INFO: shutting down applications... 00:06:05.200 01:53:10 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:05.200 01:53:10 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:05.200 01:53:10 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:05.200 01:53:10 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.099 Calling clear_iscsi_subsystem 00:06:07.099 Calling clear_nvmf_subsystem 00:06:07.099 Calling clear_nbd_subsystem 00:06:07.099 Calling clear_ublk_subsystem 00:06:07.099 Calling clear_vhost_blk_subsystem 00:06:07.099 Calling clear_vhost_scsi_subsystem 00:06:07.099 Calling clear_bdev_subsystem 00:06:07.099 01:53:12 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:07.099 01:53:12 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:07.099 01:53:12 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:07.099 01:53:12 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.099 01:53:12 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:07.099 01:53:12 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.358 01:53:12 json_config -- json_config/json_config.sh@345 -- # break 00:06:07.358 01:53:12 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:07.358 01:53:12 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:07.358 01:53:12 json_config -- json_config/common.sh@31 -- # local app=target 00:06:07.358 01:53:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.358 01:53:12 json_config -- json_config/common.sh@35 -- # [[ -n 1455701 ]] 00:06:07.358 01:53:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1455701 00:06:07.358 01:53:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.358 01:53:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.358 01:53:12 json_config -- json_config/common.sh@41 -- # kill -0 1455701 00:06:07.358 01:53:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:07.926 01:53:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:07.926 01:53:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.926 01:53:13 json_config -- json_config/common.sh@41 -- # kill -0 1455701 00:06:07.926 01:53:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:07.926 01:53:13 json_config -- json_config/common.sh@43 -- # break 00:06:07.926 01:53:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:07.926 01:53:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:07.926 SPDK target shutdown done 00:06:07.926 01:53:13 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:07.926 INFO: relaunching applications... 00:06:07.926 01:53:13 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.926 01:53:13 json_config -- json_config/common.sh@9 -- # local app=target 00:06:07.926 01:53:13 json_config -- json_config/common.sh@10 -- # shift 00:06:07.926 01:53:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:07.926 01:53:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:07.926 01:53:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:07.926 01:53:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.926 01:53:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.926 01:53:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1457006 00:06:07.926 01:53:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.926 01:53:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:07.926 Waiting for target to run... 00:06:07.926 01:53:13 json_config -- json_config/common.sh@25 -- # waitforlisten 1457006 /var/tmp/spdk_tgt.sock 00:06:07.926 01:53:13 json_config -- common/autotest_common.sh@829 -- # '[' -z 1457006 ']' 00:06:07.926 01:53:13 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:07.926 01:53:13 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.926 01:53:13 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:07.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:07.926 01:53:13 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.926 01:53:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.926 [2024-07-14 01:53:13.441745] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
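For readers tracing the json_config run above, the whole build-up reduces to a short RPC sequence followed by a save/relaunch round trip. The sketch below repeats the exact commands from the trace; the $spdk/$rpc shorthand and the redirect of save_config into spdk_tgt_config.json are illustrative, not part of the test script itself.

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # backing bdevs for the two namespaces
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    # NVMe/TCP transport, subsystem, namespaces and listener on 127.0.0.1:4420
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # persist the live configuration, then (after the first instance is stopped with SIGINT,
    # as the trace does) restart the target straight from the saved JSON
    $rpc save_config > "$spdk/spdk_tgt_config.json"
    "$spdk/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$spdk/spdk_tgt_config.json" &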
00:06:07.926 [2024-07-14 01:53:13.441843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457006 ] 00:06:07.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.492 [2024-07-14 01:53:13.944709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.492 [2024-07-14 01:53:14.026812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.775 [2024-07-14 01:53:17.059771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.775 [2024-07-14 01:53:17.092219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:12.339 01:53:17 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.339 01:53:17 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:12.339 01:53:17 json_config -- json_config/common.sh@26 -- # echo '' 00:06:12.339 00:06:12.339 01:53:17 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:12.339 01:53:17 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:12.340 INFO: Checking if target configuration is the same... 00:06:12.340 01:53:17 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.340 01:53:17 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:12.340 01:53:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.340 + '[' 2 -ne 2 ']' 00:06:12.340 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:12.340 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:12.340 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:12.340 +++ basename /dev/fd/62 00:06:12.340 ++ mktemp /tmp/62.XXX 00:06:12.340 + tmp_file_1=/tmp/62.wLq 00:06:12.340 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.340 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.340 + tmp_file_2=/tmp/spdk_tgt_config.json.5jX 00:06:12.340 + ret=0 00:06:12.340 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.597 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.855 + diff -u /tmp/62.wLq /tmp/spdk_tgt_config.json.5jX 00:06:12.855 + echo 'INFO: JSON config files are the same' 00:06:12.855 INFO: JSON config files are the same 00:06:12.855 + rm /tmp/62.wLq /tmp/spdk_tgt_config.json.5jX 00:06:12.855 + exit 0 00:06:12.855 01:53:18 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:12.855 01:53:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:12.855 INFO: changing configuration and checking if this can be detected... 
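The "Checking if target configuration is the same" step above is just a text diff once both sides are normalized. A rough equivalent is sketched below, assuming config_filter.py filters stdin to stdout as the trace suggests; the /tmp/running.json and /tmp/ondisk.json names stand in for the mktemp files (/tmp/62.wLq, /tmp/spdk_tgt_config.json.5jX) seen above.

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    filter=$spdk/test/json_config/config_filter.py
    # normalize the live config and the saved file, then compare them
    $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/running.json
    $filter -method sort < $spdk/spdk_tgt_config.json > /tmp/ondisk.json
    diff -u /tmp/running.json /tmp/ondisk.json && echo 'INFO: JSON config files are the same'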
00:06:12.855 01:53:18 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.855 01:53:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:13.113 01:53:18 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.113 01:53:18 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:13.113 01:53:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.113 + '[' 2 -ne 2 ']' 00:06:13.113 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:13.113 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:13.113 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:13.113 +++ basename /dev/fd/62 00:06:13.113 ++ mktemp /tmp/62.XXX 00:06:13.113 + tmp_file_1=/tmp/62.Yyt 00:06:13.113 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.113 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:13.113 + tmp_file_2=/tmp/spdk_tgt_config.json.FwG 00:06:13.113 + ret=0 00:06:13.113 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.372 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.372 + diff -u /tmp/62.Yyt /tmp/spdk_tgt_config.json.FwG 00:06:13.372 + ret=1 00:06:13.372 + echo '=== Start of file: /tmp/62.Yyt ===' 00:06:13.372 + cat /tmp/62.Yyt 00:06:13.372 + echo '=== End of file: /tmp/62.Yyt ===' 00:06:13.372 + echo '' 00:06:13.372 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FwG ===' 00:06:13.372 + cat /tmp/spdk_tgt_config.json.FwG 00:06:13.372 + echo '=== End of file: /tmp/spdk_tgt_config.json.FwG ===' 00:06:13.372 + echo '' 00:06:13.372 + rm /tmp/62.Yyt /tmp/spdk_tgt_config.json.FwG 00:06:13.372 + exit 1 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:13.372 INFO: configuration change detected. 
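MallocBdevForConfigChangeCheck, created earlier in the run, serves as a sentinel: deleting it is the cheapest way to make the running configuration diverge from the saved file, so the same normalized diff now exits non-zero and the test reports the change. The negative case amounts to:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # re-running the save_config | sort | diff pipeline from the previous check now returns 1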
00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:13.372 01:53:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.372 01:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@317 -- # [[ -n 1457006 ]] 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:13.372 01:53:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.372 01:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:13.372 01:53:18 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:13.372 01:53:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.372 01:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.372 01:53:19 json_config -- json_config/json_config.sh@323 -- # killprocess 1457006 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@948 -- # '[' -z 1457006 ']' 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@952 -- # kill -0 1457006 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@953 -- # uname 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1457006 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1457006' 00:06:13.372 killing process with pid 1457006 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@967 -- # kill 1457006 00:06:13.372 01:53:19 json_config -- common/autotest_common.sh@972 -- # wait 1457006 00:06:15.273 01:53:20 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.273 01:53:20 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:15.273 01:53:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.273 01:53:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.273 01:53:20 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:15.273 01:53:20 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:15.273 INFO: Success 00:06:15.273 00:06:15.273 real 0m16.741s 
00:06:15.273 user 0m18.541s 00:06:15.273 sys 0m2.216s 00:06:15.273 01:53:20 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.273 01:53:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.273 ************************************ 00:06:15.273 END TEST json_config 00:06:15.273 ************************************ 00:06:15.273 01:53:20 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.273 01:53:20 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:15.273 01:53:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.273 01:53:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.273 01:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:15.273 ************************************ 00:06:15.273 START TEST json_config_extra_key 00:06:15.273 ************************************ 00:06:15.273 01:53:20 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.273 01:53:20 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.273 01:53:20 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.273 01:53:20 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.273 01:53:20 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.273 01:53:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.273 01:53:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.273 01:53:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:15.273 01:53:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.273 01:53:20 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:15.273 01:53:20 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:15.273 INFO: launching applications... 00:06:15.273 01:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1457939 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:15.273 Waiting for target to run... 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:15.273 01:53:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1457939 /var/tmp/spdk_tgt.sock 00:06:15.274 01:53:20 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1457939 ']' 00:06:15.274 01:53:20 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.274 01:53:20 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.274 01:53:20 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:15.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:15.274 01:53:20 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.274 01:53:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 [2024-07-14 01:53:20.821891] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
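Unlike json_config, the extra_key variant never builds state over RPC: spdk_tgt is started directly from the canned extra_key.json and the test only needs the RPC socket to come up before shutting everything down again. A minimal stand-in for that launch-and-wait step is shown below; the real waitforlisten helper is more defensive, and polling rpc_get_methods here is only an illustration.

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $spdk/test/json_config/extra_key.json &
    # wait until the target answers on its UNIX-domain RPC socket
    until $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done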
00:06:15.274 [2024-07-14 01:53:20.821994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457939 ] 00:06:15.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.841 [2024-07-14 01:53:21.304521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.841 [2024-07-14 01:53:21.386635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.408 01:53:21 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.408 01:53:21 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:16.409 01:53:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:16.409 00:06:16.409 01:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:16.409 INFO: shutting down applications... 00:06:16.409 01:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:16.409 01:53:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:16.409 01:53:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:16.409 01:53:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1457939 ]] 00:06:16.409 01:53:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1457939 00:06:16.409 01:53:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:16.409 01:53:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.409 01:53:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1457939 00:06:16.409 01:53:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.667 01:53:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.667 01:53:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.667 01:53:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1457939 00:06:16.667 01:53:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:16.667 01:53:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:16.667 01:53:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:16.667 01:53:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:16.667 SPDK target shutdown done 00:06:16.667 01:53:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:16.667 Success 00:06:16.667 00:06:16.667 real 0m1.592s 00:06:16.667 user 0m1.468s 00:06:16.667 sys 0m0.576s 00:06:16.667 01:53:22 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.667 01:53:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:16.667 ************************************ 00:06:16.667 END TEST json_config_extra_key 00:06:16.667 ************************************ 00:06:16.667 01:53:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.668 01:53:22 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.668 01:53:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.668 01:53:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.668 01:53:22 -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.668 ************************************ 00:06:16.668 START TEST alias_rpc 00:06:16.668 ************************************ 00:06:16.668 01:53:22 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.926 * Looking for test storage... 00:06:16.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:16.926 01:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.926 01:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1458236 00:06:16.926 01:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:16.926 01:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1458236 00:06:16.926 01:53:22 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1458236 ']' 00:06:16.926 01:53:22 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.926 01:53:22 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.926 01:53:22 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.926 01:53:22 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.926 01:53:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.926 [2024-07-14 01:53:22.459432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:16.926 [2024-07-14 01:53:22.459530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458236 ] 00:06:16.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.926 [2024-07-14 01:53:22.517434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.926 [2024-07-14 01:53:22.605581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.185 01:53:22 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.185 01:53:22 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:17.185 01:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:17.443 01:53:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1458236 00:06:17.443 01:53:23 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1458236 ']' 00:06:17.443 01:53:23 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1458236 00:06:17.443 01:53:23 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:17.443 01:53:23 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.443 01:53:23 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1458236 00:06:17.707 01:53:23 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.707 01:53:23 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.707 01:53:23 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1458236' 00:06:17.707 killing process with pid 1458236 00:06:17.707 01:53:23 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1458236 00:06:17.707 01:53:23 alias_rpc -- common/autotest_common.sh@972 -- # wait 1458236 00:06:17.966 00:06:17.966 real 0m1.207s 00:06:17.966 user 0m1.278s 00:06:17.966 sys 0m0.423s 00:06:17.966 01:53:23 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.966 01:53:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.966 ************************************ 00:06:17.966 END TEST alias_rpc 00:06:17.966 ************************************ 00:06:17.966 01:53:23 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.966 01:53:23 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:17.966 01:53:23 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:17.966 01:53:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.966 01:53:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.966 01:53:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.966 ************************************ 00:06:17.966 START TEST spdkcli_tcp 00:06:17.966 ************************************ 00:06:17.966 01:53:23 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:18.224 * Looking for test storage... 00:06:18.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:18.224 01:53:23 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:18.224 01:53:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1458427 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:18.224 01:53:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1458427 00:06:18.224 01:53:23 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1458427 ']' 00:06:18.224 01:53:23 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.224 01:53:23 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.224 01:53:23 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.224 01:53:23 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.224 01:53:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.224 [2024-07-14 01:53:23.723561] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
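The spdkcli_tcp test that starts here exercises the same RPC surface over TCP instead of the UNIX socket: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is pointed at the TCP address, which is what the next commands in the trace do before rpc_get_methods returns the long method list below. Reduced to its essentials:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # expose the target's UNIX-domain RPC socket on TCP port 9998
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    # call the target over TCP with retries (-r) and a per-call timeout (-t)
    $spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods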
00:06:18.224 [2024-07-14 01:53:23.723652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458427 ] 00:06:18.224 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.224 [2024-07-14 01:53:23.780728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.224 [2024-07-14 01:53:23.865913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.224 [2024-07-14 01:53:23.865917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.482 01:53:24 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.482 01:53:24 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:18.482 01:53:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1458443 00:06:18.482 01:53:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:18.482 01:53:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:18.741 [ 00:06:18.741 "bdev_malloc_delete", 00:06:18.741 "bdev_malloc_create", 00:06:18.741 "bdev_null_resize", 00:06:18.741 "bdev_null_delete", 00:06:18.741 "bdev_null_create", 00:06:18.741 "bdev_nvme_cuse_unregister", 00:06:18.741 "bdev_nvme_cuse_register", 00:06:18.741 "bdev_opal_new_user", 00:06:18.741 "bdev_opal_set_lock_state", 00:06:18.741 "bdev_opal_delete", 00:06:18.741 "bdev_opal_get_info", 00:06:18.741 "bdev_opal_create", 00:06:18.741 "bdev_nvme_opal_revert", 00:06:18.741 "bdev_nvme_opal_init", 00:06:18.741 "bdev_nvme_send_cmd", 00:06:18.741 "bdev_nvme_get_path_iostat", 00:06:18.741 "bdev_nvme_get_mdns_discovery_info", 00:06:18.741 "bdev_nvme_stop_mdns_discovery", 00:06:18.741 "bdev_nvme_start_mdns_discovery", 00:06:18.741 "bdev_nvme_set_multipath_policy", 00:06:18.741 "bdev_nvme_set_preferred_path", 00:06:18.741 "bdev_nvme_get_io_paths", 00:06:18.741 "bdev_nvme_remove_error_injection", 00:06:18.741 "bdev_nvme_add_error_injection", 00:06:18.741 "bdev_nvme_get_discovery_info", 00:06:18.741 "bdev_nvme_stop_discovery", 00:06:18.741 "bdev_nvme_start_discovery", 00:06:18.741 "bdev_nvme_get_controller_health_info", 00:06:18.741 "bdev_nvme_disable_controller", 00:06:18.741 "bdev_nvme_enable_controller", 00:06:18.741 "bdev_nvme_reset_controller", 00:06:18.741 "bdev_nvme_get_transport_statistics", 00:06:18.741 "bdev_nvme_apply_firmware", 00:06:18.741 "bdev_nvme_detach_controller", 00:06:18.741 "bdev_nvme_get_controllers", 00:06:18.741 "bdev_nvme_attach_controller", 00:06:18.741 "bdev_nvme_set_hotplug", 00:06:18.741 "bdev_nvme_set_options", 00:06:18.741 "bdev_passthru_delete", 00:06:18.741 "bdev_passthru_create", 00:06:18.741 "bdev_lvol_set_parent_bdev", 00:06:18.741 "bdev_lvol_set_parent", 00:06:18.741 "bdev_lvol_check_shallow_copy", 00:06:18.741 "bdev_lvol_start_shallow_copy", 00:06:18.741 "bdev_lvol_grow_lvstore", 00:06:18.741 "bdev_lvol_get_lvols", 00:06:18.741 "bdev_lvol_get_lvstores", 00:06:18.741 "bdev_lvol_delete", 00:06:18.741 "bdev_lvol_set_read_only", 00:06:18.741 "bdev_lvol_resize", 00:06:18.741 "bdev_lvol_decouple_parent", 00:06:18.741 "bdev_lvol_inflate", 00:06:18.741 "bdev_lvol_rename", 00:06:18.741 "bdev_lvol_clone_bdev", 00:06:18.741 "bdev_lvol_clone", 00:06:18.741 "bdev_lvol_snapshot", 00:06:18.741 "bdev_lvol_create", 00:06:18.741 "bdev_lvol_delete_lvstore", 00:06:18.741 
"bdev_lvol_rename_lvstore", 00:06:18.741 "bdev_lvol_create_lvstore", 00:06:18.741 "bdev_raid_set_options", 00:06:18.741 "bdev_raid_remove_base_bdev", 00:06:18.741 "bdev_raid_add_base_bdev", 00:06:18.741 "bdev_raid_delete", 00:06:18.741 "bdev_raid_create", 00:06:18.741 "bdev_raid_get_bdevs", 00:06:18.741 "bdev_error_inject_error", 00:06:18.741 "bdev_error_delete", 00:06:18.741 "bdev_error_create", 00:06:18.741 "bdev_split_delete", 00:06:18.741 "bdev_split_create", 00:06:18.741 "bdev_delay_delete", 00:06:18.741 "bdev_delay_create", 00:06:18.741 "bdev_delay_update_latency", 00:06:18.741 "bdev_zone_block_delete", 00:06:18.741 "bdev_zone_block_create", 00:06:18.741 "blobfs_create", 00:06:18.741 "blobfs_detect", 00:06:18.741 "blobfs_set_cache_size", 00:06:18.741 "bdev_aio_delete", 00:06:18.741 "bdev_aio_rescan", 00:06:18.741 "bdev_aio_create", 00:06:18.741 "bdev_ftl_set_property", 00:06:18.741 "bdev_ftl_get_properties", 00:06:18.741 "bdev_ftl_get_stats", 00:06:18.741 "bdev_ftl_unmap", 00:06:18.741 "bdev_ftl_unload", 00:06:18.741 "bdev_ftl_delete", 00:06:18.741 "bdev_ftl_load", 00:06:18.741 "bdev_ftl_create", 00:06:18.741 "bdev_virtio_attach_controller", 00:06:18.741 "bdev_virtio_scsi_get_devices", 00:06:18.741 "bdev_virtio_detach_controller", 00:06:18.741 "bdev_virtio_blk_set_hotplug", 00:06:18.741 "bdev_iscsi_delete", 00:06:18.741 "bdev_iscsi_create", 00:06:18.741 "bdev_iscsi_set_options", 00:06:18.741 "accel_error_inject_error", 00:06:18.741 "ioat_scan_accel_module", 00:06:18.741 "dsa_scan_accel_module", 00:06:18.741 "iaa_scan_accel_module", 00:06:18.741 "vfu_virtio_create_scsi_endpoint", 00:06:18.741 "vfu_virtio_scsi_remove_target", 00:06:18.741 "vfu_virtio_scsi_add_target", 00:06:18.741 "vfu_virtio_create_blk_endpoint", 00:06:18.741 "vfu_virtio_delete_endpoint", 00:06:18.741 "keyring_file_remove_key", 00:06:18.741 "keyring_file_add_key", 00:06:18.741 "keyring_linux_set_options", 00:06:18.741 "iscsi_get_histogram", 00:06:18.741 "iscsi_enable_histogram", 00:06:18.741 "iscsi_set_options", 00:06:18.741 "iscsi_get_auth_groups", 00:06:18.741 "iscsi_auth_group_remove_secret", 00:06:18.741 "iscsi_auth_group_add_secret", 00:06:18.741 "iscsi_delete_auth_group", 00:06:18.741 "iscsi_create_auth_group", 00:06:18.741 "iscsi_set_discovery_auth", 00:06:18.741 "iscsi_get_options", 00:06:18.741 "iscsi_target_node_request_logout", 00:06:18.741 "iscsi_target_node_set_redirect", 00:06:18.741 "iscsi_target_node_set_auth", 00:06:18.741 "iscsi_target_node_add_lun", 00:06:18.741 "iscsi_get_stats", 00:06:18.741 "iscsi_get_connections", 00:06:18.741 "iscsi_portal_group_set_auth", 00:06:18.741 "iscsi_start_portal_group", 00:06:18.741 "iscsi_delete_portal_group", 00:06:18.741 "iscsi_create_portal_group", 00:06:18.741 "iscsi_get_portal_groups", 00:06:18.741 "iscsi_delete_target_node", 00:06:18.741 "iscsi_target_node_remove_pg_ig_maps", 00:06:18.741 "iscsi_target_node_add_pg_ig_maps", 00:06:18.741 "iscsi_create_target_node", 00:06:18.741 "iscsi_get_target_nodes", 00:06:18.741 "iscsi_delete_initiator_group", 00:06:18.741 "iscsi_initiator_group_remove_initiators", 00:06:18.741 "iscsi_initiator_group_add_initiators", 00:06:18.741 "iscsi_create_initiator_group", 00:06:18.741 "iscsi_get_initiator_groups", 00:06:18.741 "nvmf_set_crdt", 00:06:18.741 "nvmf_set_config", 00:06:18.741 "nvmf_set_max_subsystems", 00:06:18.741 "nvmf_stop_mdns_prr", 00:06:18.741 "nvmf_publish_mdns_prr", 00:06:18.741 "nvmf_subsystem_get_listeners", 00:06:18.741 "nvmf_subsystem_get_qpairs", 00:06:18.741 "nvmf_subsystem_get_controllers", 00:06:18.741 
"nvmf_get_stats", 00:06:18.741 "nvmf_get_transports", 00:06:18.741 "nvmf_create_transport", 00:06:18.741 "nvmf_get_targets", 00:06:18.741 "nvmf_delete_target", 00:06:18.741 "nvmf_create_target", 00:06:18.741 "nvmf_subsystem_allow_any_host", 00:06:18.741 "nvmf_subsystem_remove_host", 00:06:18.741 "nvmf_subsystem_add_host", 00:06:18.741 "nvmf_ns_remove_host", 00:06:18.741 "nvmf_ns_add_host", 00:06:18.741 "nvmf_subsystem_remove_ns", 00:06:18.741 "nvmf_subsystem_add_ns", 00:06:18.741 "nvmf_subsystem_listener_set_ana_state", 00:06:18.741 "nvmf_discovery_get_referrals", 00:06:18.741 "nvmf_discovery_remove_referral", 00:06:18.741 "nvmf_discovery_add_referral", 00:06:18.741 "nvmf_subsystem_remove_listener", 00:06:18.741 "nvmf_subsystem_add_listener", 00:06:18.741 "nvmf_delete_subsystem", 00:06:18.741 "nvmf_create_subsystem", 00:06:18.741 "nvmf_get_subsystems", 00:06:18.741 "env_dpdk_get_mem_stats", 00:06:18.741 "nbd_get_disks", 00:06:18.741 "nbd_stop_disk", 00:06:18.741 "nbd_start_disk", 00:06:18.741 "ublk_recover_disk", 00:06:18.741 "ublk_get_disks", 00:06:18.741 "ublk_stop_disk", 00:06:18.741 "ublk_start_disk", 00:06:18.741 "ublk_destroy_target", 00:06:18.741 "ublk_create_target", 00:06:18.741 "virtio_blk_create_transport", 00:06:18.741 "virtio_blk_get_transports", 00:06:18.741 "vhost_controller_set_coalescing", 00:06:18.741 "vhost_get_controllers", 00:06:18.741 "vhost_delete_controller", 00:06:18.741 "vhost_create_blk_controller", 00:06:18.741 "vhost_scsi_controller_remove_target", 00:06:18.741 "vhost_scsi_controller_add_target", 00:06:18.741 "vhost_start_scsi_controller", 00:06:18.741 "vhost_create_scsi_controller", 00:06:18.741 "thread_set_cpumask", 00:06:18.741 "framework_get_governor", 00:06:18.741 "framework_get_scheduler", 00:06:18.741 "framework_set_scheduler", 00:06:18.741 "framework_get_reactors", 00:06:18.741 "thread_get_io_channels", 00:06:18.741 "thread_get_pollers", 00:06:18.741 "thread_get_stats", 00:06:18.741 "framework_monitor_context_switch", 00:06:18.741 "spdk_kill_instance", 00:06:18.741 "log_enable_timestamps", 00:06:18.741 "log_get_flags", 00:06:18.741 "log_clear_flag", 00:06:18.741 "log_set_flag", 00:06:18.741 "log_get_level", 00:06:18.741 "log_set_level", 00:06:18.741 "log_get_print_level", 00:06:18.741 "log_set_print_level", 00:06:18.741 "framework_enable_cpumask_locks", 00:06:18.741 "framework_disable_cpumask_locks", 00:06:18.741 "framework_wait_init", 00:06:18.741 "framework_start_init", 00:06:18.741 "scsi_get_devices", 00:06:18.741 "bdev_get_histogram", 00:06:18.741 "bdev_enable_histogram", 00:06:18.741 "bdev_set_qos_limit", 00:06:18.741 "bdev_set_qd_sampling_period", 00:06:18.741 "bdev_get_bdevs", 00:06:18.741 "bdev_reset_iostat", 00:06:18.741 "bdev_get_iostat", 00:06:18.741 "bdev_examine", 00:06:18.741 "bdev_wait_for_examine", 00:06:18.741 "bdev_set_options", 00:06:18.741 "notify_get_notifications", 00:06:18.741 "notify_get_types", 00:06:18.741 "accel_get_stats", 00:06:18.741 "accel_set_options", 00:06:18.741 "accel_set_driver", 00:06:18.741 "accel_crypto_key_destroy", 00:06:18.741 "accel_crypto_keys_get", 00:06:18.741 "accel_crypto_key_create", 00:06:18.741 "accel_assign_opc", 00:06:18.741 "accel_get_module_info", 00:06:18.741 "accel_get_opc_assignments", 00:06:18.741 "vmd_rescan", 00:06:18.741 "vmd_remove_device", 00:06:18.741 "vmd_enable", 00:06:18.741 "sock_get_default_impl", 00:06:18.741 "sock_set_default_impl", 00:06:18.741 "sock_impl_set_options", 00:06:18.741 "sock_impl_get_options", 00:06:18.741 "iobuf_get_stats", 00:06:18.741 "iobuf_set_options", 
00:06:18.741 "keyring_get_keys", 00:06:18.741 "framework_get_pci_devices", 00:06:18.741 "framework_get_config", 00:06:18.741 "framework_get_subsystems", 00:06:18.741 "vfu_tgt_set_base_path", 00:06:18.741 "trace_get_info", 00:06:18.741 "trace_get_tpoint_group_mask", 00:06:18.741 "trace_disable_tpoint_group", 00:06:18.741 "trace_enable_tpoint_group", 00:06:18.741 "trace_clear_tpoint_mask", 00:06:18.741 "trace_set_tpoint_mask", 00:06:18.741 "spdk_get_version", 00:06:18.741 "rpc_get_methods" 00:06:18.741 ] 00:06:18.742 01:53:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.742 01:53:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:18.742 01:53:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1458427 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1458427 ']' 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1458427 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1458427 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1458427' 00:06:18.742 killing process with pid 1458427 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1458427 00:06:18.742 01:53:24 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1458427 00:06:19.344 00:06:19.344 real 0m1.203s 00:06:19.344 user 0m2.136s 00:06:19.344 sys 0m0.443s 00:06:19.344 01:53:24 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.344 01:53:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.344 ************************************ 00:06:19.344 END TEST spdkcli_tcp 00:06:19.344 ************************************ 00:06:19.344 01:53:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.344 01:53:24 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.344 01:53:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.344 01:53:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.344 01:53:24 -- common/autotest_common.sh@10 -- # set +x 00:06:19.344 ************************************ 00:06:19.344 START TEST dpdk_mem_utility 00:06:19.344 ************************************ 00:06:19.344 01:53:24 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.344 * Looking for test storage... 
00:06:19.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:19.344 01:53:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:19.344 01:53:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1458636 00:06:19.344 01:53:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.344 01:53:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1458636 00:06:19.344 01:53:24 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1458636 ']' 00:06:19.344 01:53:24 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.344 01:53:24 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.344 01:53:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.344 01:53:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.344 01:53:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.344 [2024-07-14 01:53:24.965070] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:19.344 [2024-07-14 01:53:24.965146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458636 ] 00:06:19.344 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.344 [2024-07-14 01:53:25.024108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.603 [2024-07-14 01:53:25.110124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.861 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.861 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:19.862 01:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:19.862 01:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:19.862 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.862 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.862 { 00:06:19.862 "filename": "/tmp/spdk_mem_dump.txt" 00:06:19.862 } 00:06:19.862 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.862 01:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:19.862 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:19.862 1 heaps totaling size 814.000000 MiB 00:06:19.862 size: 814.000000 MiB heap id: 0 00:06:19.862 end heaps---------- 00:06:19.862 8 mempools totaling size 598.116089 MiB 00:06:19.862 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:19.862 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:19.862 size: 84.521057 MiB name: bdev_io_1458636 00:06:19.862 size: 51.011292 MiB name: evtpool_1458636 00:06:19.862 
size: 50.003479 MiB name: msgpool_1458636 00:06:19.862 size: 21.763794 MiB name: PDU_Pool 00:06:19.862 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:19.862 size: 0.026123 MiB name: Session_Pool 00:06:19.862 end mempools------- 00:06:19.862 6 memzones totaling size 4.142822 MiB 00:06:19.862 size: 1.000366 MiB name: RG_ring_0_1458636 00:06:19.862 size: 1.000366 MiB name: RG_ring_1_1458636 00:06:19.862 size: 1.000366 MiB name: RG_ring_4_1458636 00:06:19.862 size: 1.000366 MiB name: RG_ring_5_1458636 00:06:19.862 size: 0.125366 MiB name: RG_ring_2_1458636 00:06:19.862 size: 0.015991 MiB name: RG_ring_3_1458636 00:06:19.862 end memzones------- 00:06:19.862 01:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:19.862 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:19.862 list of free elements. size: 12.519348 MiB 00:06:19.862 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:19.862 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:19.862 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:19.862 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:19.862 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:19.862 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:19.862 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:19.862 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:19.862 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:19.862 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:19.862 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:19.862 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:19.862 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:19.862 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:19.862 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:19.862 list of standard malloc elements. 
size: 199.218079 MiB 00:06:19.862 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:19.862 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:19.862 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:19.862 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:19.862 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:19.862 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:19.862 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:19.862 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:19.862 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:19.862 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:19.862 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:19.862 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:19.862 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:19.862 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:19.862 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:19.862 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:19.862 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:19.862 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:19.862 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:19.862 list of memzone associated elements. 
size: 602.262573 MiB 00:06:19.862 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:19.862 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:19.862 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:19.862 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:19.862 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:19.862 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1458636_0 00:06:19.862 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:19.862 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1458636_0 00:06:19.862 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:19.862 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1458636_0 00:06:19.862 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:19.862 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:19.862 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:19.862 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:19.862 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:19.862 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1458636 00:06:19.862 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:19.862 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1458636 00:06:19.862 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:19.862 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1458636 00:06:19.862 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:19.862 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:19.862 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:19.862 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:19.862 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:19.862 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:19.862 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:19.862 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:19.862 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:19.862 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1458636 00:06:19.862 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:19.862 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1458636 00:06:19.862 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:19.862 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1458636 00:06:19.862 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:19.862 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1458636 00:06:19.862 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:19.862 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1458636 00:06:19.862 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:19.862 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:19.862 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:19.862 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:19.862 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:19.862 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:19.862 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:19.862 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1458636 00:06:19.862 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:19.862 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:19.862 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:19.862 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:19.862 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:19.862 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1458636 00:06:19.862 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:19.862 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:19.862 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:19.862 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1458636 00:06:19.862 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:19.862 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1458636 00:06:19.863 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:19.863 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:19.863 01:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:19.863 01:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1458636 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1458636 ']' 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1458636 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1458636 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1458636' 00:06:19.863 killing process with pid 1458636 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1458636 00:06:19.863 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1458636 00:06:20.429 00:06:20.429 real 0m1.054s 00:06:20.429 user 0m1.030s 00:06:20.429 sys 0m0.393s 00:06:20.429 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.429 01:53:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.429 ************************************ 00:06:20.429 END TEST dpdk_mem_utility 00:06:20.429 ************************************ 00:06:20.429 01:53:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.429 01:53:25 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:20.429 01:53:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.429 01:53:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.429 01:53:25 -- common/autotest_common.sh@10 -- # set +x 00:06:20.429 ************************************ 00:06:20.429 START TEST event 00:06:20.429 ************************************ 00:06:20.429 01:53:25 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:20.429 * Looking for test storage... 
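For reference, the heap/mempool/memzone dump in the dpdk_mem_utility section above is produced by scripts/dpdk_mem_info.py parsing the memory dump that the env_dpdk_get_mem_stats RPC writes to /tmp/spdk_mem_dump.txt. A minimal sketch of the same flow, assuming a running spdk_tgt on the default socket and paths relative to the SPDK tree:

  ./build/bin/spdk_tgt &                       # target whose DPDK memory is being inspected
  ./scripts/rpc.py env_dpdk_get_mem_stats      # target writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                   # summary: heaps, mempools, memzones (as dumped above)
  ./scripts/dpdk_mem_info.py -m 0              # per-element detail for heap id 0, as in the log above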
00:06:20.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:20.429 01:53:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:20.429 01:53:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:20.429 01:53:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.429 01:53:26 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:20.429 01:53:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.429 01:53:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.429 ************************************ 00:06:20.429 START TEST event_perf 00:06:20.429 ************************************ 00:06:20.429 01:53:26 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.429 Running I/O for 1 seconds...[2024-07-14 01:53:26.054050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:20.429 [2024-07-14 01:53:26.054119] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458825 ] 00:06:20.429 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.429 [2024-07-14 01:53:26.115017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.687 [2024-07-14 01:53:26.206026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.687 [2024-07-14 01:53:26.206050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.687 [2024-07-14 01:53:26.206109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.687 [2024-07-14 01:53:26.206112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.618 Running I/O for 1 seconds... 00:06:21.618 lcore 0: 238620 00:06:21.618 lcore 1: 238618 00:06:21.618 lcore 2: 238619 00:06:21.618 lcore 3: 238618 00:06:21.618 done. 00:06:21.618 00:06:21.618 real 0m1.245s 00:06:21.618 user 0m4.155s 00:06:21.618 sys 0m0.086s 00:06:21.618 01:53:27 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.618 01:53:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.618 ************************************ 00:06:21.618 END TEST event_perf 00:06:21.618 ************************************ 00:06:21.618 01:53:27 event -- common/autotest_common.sh@1142 -- # return 0 00:06:21.618 01:53:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:21.618 01:53:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:21.618 01:53:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.875 01:53:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.875 ************************************ 00:06:21.875 START TEST event_reactor 00:06:21.875 ************************************ 00:06:21.875 01:53:27 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:21.875 [2024-07-14 01:53:27.345645] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
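The per-lcore counters and timings in the event_perf run above come from standalone benchmarks under test/event; they can be run directly. The invocations below mirror the ones appearing in this log (the core masks and 1-second durations are simply the values this run used):

  ./test/event/event_perf/event_perf -m 0xF -t 1      # per-lcore event counters over 1 second on 4 cores
  ./test/event/reactor/reactor -t 1                   # single-reactor tick/poller trace
  ./test/event/reactor_perf/reactor_perf -t 1         # events-per-second throughput on one reactor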
00:06:21.875 [2024-07-14 01:53:27.345716] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458986 ] 00:06:21.875 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.875 [2024-07-14 01:53:27.409559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.875 [2024-07-14 01:53:27.500724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.271 test_start 00:06:23.271 oneshot 00:06:23.271 tick 100 00:06:23.271 tick 100 00:06:23.271 tick 250 00:06:23.271 tick 100 00:06:23.271 tick 100 00:06:23.271 tick 100 00:06:23.271 tick 250 00:06:23.271 tick 500 00:06:23.271 tick 100 00:06:23.271 tick 100 00:06:23.271 tick 250 00:06:23.271 tick 100 00:06:23.271 tick 100 00:06:23.271 test_end 00:06:23.271 00:06:23.271 real 0m1.250s 00:06:23.271 user 0m1.172s 00:06:23.271 sys 0m0.074s 00:06:23.271 01:53:28 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.271 01:53:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:23.271 ************************************ 00:06:23.271 END TEST event_reactor 00:06:23.271 ************************************ 00:06:23.271 01:53:28 event -- common/autotest_common.sh@1142 -- # return 0 00:06:23.271 01:53:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.271 01:53:28 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:23.271 01:53:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.271 01:53:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.271 ************************************ 00:06:23.271 START TEST event_reactor_perf 00:06:23.271 ************************************ 00:06:23.271 01:53:28 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.271 [2024-07-14 01:53:28.650347] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:23.271 [2024-07-14 01:53:28.650417] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459139 ] 00:06:23.271 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.271 [2024-07-14 01:53:28.715413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.271 [2024-07-14 01:53:28.805928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.203 test_start 00:06:24.203 test_end 00:06:24.203 Performance: 356469 events per second 00:06:24.203 00:06:24.203 real 0m1.252s 00:06:24.203 user 0m1.160s 00:06:24.203 sys 0m0.087s 00:06:24.203 01:53:29 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.203 01:53:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.203 ************************************ 00:06:24.203 END TEST event_reactor_perf 00:06:24.203 ************************************ 00:06:24.461 01:53:29 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.461 01:53:29 event -- event/event.sh@49 -- # uname -s 00:06:24.461 01:53:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:24.461 01:53:29 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.461 01:53:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.461 01:53:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.461 01:53:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.461 ************************************ 00:06:24.461 START TEST event_scheduler 00:06:24.461 ************************************ 00:06:24.461 01:53:29 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.461 * Looking for test storage... 00:06:24.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:24.461 01:53:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:24.461 01:53:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1459325 00:06:24.461 01:53:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:24.461 01:53:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.461 01:53:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1459325 00:06:24.461 01:53:30 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1459325 ']' 00:06:24.461 01:53:30 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.461 01:53:30 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.461 01:53:30 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:24.461 01:53:30 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.461 01:53:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.461 [2024-07-14 01:53:30.045055] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:24.461 [2024-07-14 01:53:30.045164] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459325 ] 00:06:24.461 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.461 [2024-07-14 01:53:30.104129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.719 [2024-07-14 01:53:30.194018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.719 [2024-07-14 01:53:30.194073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.719 [2024-07-14 01:53:30.194142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.719 [2024-07-14 01:53:30.194145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.719 01:53:30 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.719 01:53:30 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:24.719 01:53:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:24.719 01:53:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.719 01:53:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.719 [2024-07-14 01:53:30.275035] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:24.720 [2024-07-14 01:53:30.275064] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:24.720 [2024-07-14 01:53:30.275082] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:24.720 [2024-07-14 01:53:30.275093] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:24.720 [2024-07-14 01:53:30.275104] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:24.720 01:53:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.720 01:53:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:24.720 01:53:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.720 01:53:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.720 [2024-07-14 01:53:30.367041] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
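The scheduler test application above is launched with --wait-for-rpc, so the dynamic scheduler is selected over RPC before framework initialization completes. A minimal sketch of that sequence, using the same options this run used (paths relative to the SPDK tree):

  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  ./scripts/rpc.py framework_set_scheduler dynamic   # falls back if the dpdk governor is unavailable, as logged above
  ./scripts/rpc.py framework_start_init              # reactors start once init finishes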
00:06:24.720 01:53:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.720 01:53:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:24.720 01:53:30 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.720 01:53:30 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.720 01:53:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.720 ************************************ 00:06:24.720 START TEST scheduler_create_thread 00:06:24.720 ************************************ 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.720 2 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.720 3 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.720 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 4 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 5 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 6 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 7 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 8 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 9 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 10 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.978 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.543 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.543 00:06:25.543 real 0m0.587s 00:06:25.543 user 0m0.010s 00:06:25.543 sys 0m0.003s 00:06:25.543 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.543 01:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.543 ************************************ 00:06:25.543 END TEST scheduler_create_thread 00:06:25.543 ************************************ 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:25.543 01:53:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:25.543 01:53:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1459325 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1459325 ']' 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1459325 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1459325 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1459325' 00:06:25.543 killing process with pid 1459325 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1459325 00:06:25.543 01:53:31 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1459325 00:06:25.800 [2024-07-14 01:53:31.459118] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
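The scheduler_create_thread test above drives the running scheduler app through test-only RPCs provided by a plugin (scheduler_plugin) rather than the standard rpc.py method set. Roughly, with PYTHONPATH pointing at the plugin directory (an assumption about how the plugin is located):

  export PYTHONPATH=./test/event/scheduler:$PYTHONPATH   # assumed location of scheduler_plugin.py
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id 11, as returned above
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12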
00:06:26.057 00:06:26.057 real 0m1.719s 00:06:26.057 user 0m2.256s 00:06:26.057 sys 0m0.330s 00:06:26.057 01:53:31 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.057 01:53:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:26.057 ************************************ 00:06:26.057 END TEST event_scheduler 00:06:26.057 ************************************ 00:06:26.057 01:53:31 event -- common/autotest_common.sh@1142 -- # return 0 00:06:26.057 01:53:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:26.057 01:53:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:26.057 01:53:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.057 01:53:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.057 01:53:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.057 ************************************ 00:06:26.057 START TEST app_repeat 00:06:26.057 ************************************ 00:06:26.057 01:53:31 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:26.057 01:53:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.057 01:53:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1459628 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1459628' 00:06:26.058 Process app_repeat pid: 1459628 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:26.058 spdk_app_start Round 0 00:06:26.058 01:53:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1459628 /var/tmp/spdk-nbd.sock 00:06:26.058 01:53:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1459628 ']' 00:06:26.058 01:53:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.058 01:53:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.058 01:53:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.058 01:53:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.058 01:53:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.058 [2024-07-14 01:53:31.746994] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:26.058 [2024-07-14 01:53:31.747057] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459628 ] 00:06:26.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.316 [2024-07-14 01:53:31.811353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.316 [2024-07-14 01:53:31.906699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.316 [2024-07-14 01:53:31.906703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.574 01:53:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.574 01:53:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:26.574 01:53:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.574 Malloc0 00:06:26.834 01:53:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.092 Malloc1 00:06:27.092 01:53:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.092 01:53:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.350 /dev/nbd0 00:06:27.350 01:53:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.350 01:53:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:27.350 01:53:32 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.350 1+0 records in 00:06:27.350 1+0 records out 00:06:27.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017775 s, 23.0 MB/s 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:27.350 01:53:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:27.350 01:53:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.350 01:53:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.350 01:53:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.607 /dev/nbd1 00:06:27.607 01:53:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.607 01:53:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.607 1+0 records in 00:06:27.607 1+0 records out 00:06:27.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206389 s, 19.8 MB/s 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:27.607 01:53:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:27.607 01:53:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.607 01:53:33 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.607 01:53:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.607 01:53:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.607 01:53:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.864 { 00:06:27.864 "nbd_device": "/dev/nbd0", 00:06:27.864 "bdev_name": "Malloc0" 00:06:27.864 }, 00:06:27.864 { 00:06:27.864 "nbd_device": "/dev/nbd1", 00:06:27.864 "bdev_name": "Malloc1" 00:06:27.864 } 00:06:27.864 ]' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.864 { 00:06:27.864 "nbd_device": "/dev/nbd0", 00:06:27.864 "bdev_name": "Malloc0" 00:06:27.864 }, 00:06:27.864 { 00:06:27.864 "nbd_device": "/dev/nbd1", 00:06:27.864 "bdev_name": "Malloc1" 00:06:27.864 } 00:06:27.864 ]' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.864 /dev/nbd1' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.864 /dev/nbd1' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.864 256+0 records in 00:06:27.864 256+0 records out 00:06:27.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503341 s, 208 MB/s 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.864 256+0 records in 00:06:27.864 256+0 records out 00:06:27.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234174 s, 44.8 MB/s 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.864 256+0 records in 00:06:27.864 256+0 records out 00:06:27.864 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0230968 s, 45.4 MB/s 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.864 01:53:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.122 01:53:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.380 01:53:34 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.380 01:53:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.637 01:53:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.637 01:53:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.895 01:53:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.153 [2024-07-14 01:53:34.803300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.412 [2024-07-14 01:53:34.894092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.412 [2024-07-14 01:53:34.894092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.412 [2024-07-14 01:53:34.952066] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.412 [2024-07-14 01:53:34.952125] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.941 01:53:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.941 01:53:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.941 spdk_app_start Round 1 00:06:31.941 01:53:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1459628 /var/tmp/spdk-nbd.sock 00:06:31.941 01:53:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1459628 ']' 00:06:31.941 01:53:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.941 01:53:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.941 01:53:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
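The round traced above exercises the NBD write/verify path: a 1 MiB random file is written onto each exported /dev/nbdX device, byte-compared back with cmp, and the devices are then detached and re-counted. A minimal standalone sketch of that cycle, assuming two exported devices; the temp-file path is illustrative, not the path nbd_common.sh actually uses:

#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify cycle; error handling beyond
# set -e is omitted.
set -e
tmp_file=/tmp/nbdrandtest          # illustrative location for the random data
nbd_list=(/dev/nbd0 /dev/nbd1)

# write phase: 1 MiB of random data, pushed to every device with O_DIRECT
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: byte-compare the first 1 MiB of each device against the file
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"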
00:06:31.941 01:53:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.941 01:53:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.200 01:53:37 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.200 01:53:37 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:32.200 01:53:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.458 Malloc0 00:06:32.458 01:53:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.717 Malloc1 00:06:32.717 01:53:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.717 01:53:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.977 /dev/nbd0 00:06:32.977 01:53:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.977 01:53:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:32.977 1+0 records in 00:06:32.977 1+0 records out 00:06:32.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201222 s, 20.4 MB/s 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.977 01:53:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.977 01:53:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.977 01:53:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.977 01:53:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.268 /dev/nbd1 00:06:33.268 01:53:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.268 01:53:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.268 01:53:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:33.268 01:53:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:33.268 01:53:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:33.268 01:53:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.269 1+0 records in 00:06:33.269 1+0 records out 00:06:33.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193795 s, 21.1 MB/s 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:33.269 01:53:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:33.269 01:53:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.269 01:53:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.269 01:53:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.269 01:53:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.269 01:53:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.527 01:53:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:33.527 { 00:06:33.527 "nbd_device": "/dev/nbd0", 00:06:33.527 "bdev_name": "Malloc0" 00:06:33.527 }, 00:06:33.527 { 00:06:33.527 "nbd_device": "/dev/nbd1", 00:06:33.527 "bdev_name": "Malloc1" 00:06:33.527 } 00:06:33.527 ]' 00:06:33.527 01:53:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.527 { 00:06:33.527 "nbd_device": "/dev/nbd0", 00:06:33.527 "bdev_name": "Malloc0" 00:06:33.527 }, 00:06:33.527 { 00:06:33.527 "nbd_device": "/dev/nbd1", 00:06:33.527 "bdev_name": "Malloc1" 00:06:33.527 } 00:06:33.527 ]' 00:06:33.527 01:53:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.786 /dev/nbd1' 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.786 /dev/nbd1' 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.786 256+0 records in 00:06:33.786 256+0 records out 00:06:33.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490069 s, 214 MB/s 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.786 256+0 records in 00:06:33.786 256+0 records out 00:06:33.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202137 s, 51.9 MB/s 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.786 256+0 records in 00:06:33.786 256+0 records out 00:06:33.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231163 s, 45.4 MB/s 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.786 01:53:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.044 01:53:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.302 01:53:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.560 01:53:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.560 01:53:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.818 01:53:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.077 [2024-07-14 01:53:40.620526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.077 [2024-07-14 01:53:40.710467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.077 [2024-07-14 01:53:40.710471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.336 [2024-07-14 01:53:40.773696] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.336 [2024-07-14 01:53:40.773763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.863 01:53:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.863 01:53:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:37.863 spdk_app_start Round 2 00:06:37.863 01:53:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1459628 /var/tmp/spdk-nbd.sock 00:06:37.863 01:53:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1459628 ']' 00:06:37.863 01:53:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.863 01:53:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.863 01:53:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
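The stop sequence above relies on polling helpers (waitfornbd / waitfornbd_exit) that watch /proc/partitions for the device node to appear or disappear, retrying up to 20 times. A rough sketch of that pattern; the sleep interval and return handling are assumptions, only the grep/loop structure comes from the trace:

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break        # device node registered, stop polling
        fi
        sleep 0.1        # assumed back-off between retries
    done
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            break        # device node gone, nbd_stop_disk has taken effect
        fi
        sleep 0.1
    done
}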
00:06:37.863 01:53:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.863 01:53:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.120 01:53:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.120 01:53:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:38.120 01:53:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.377 Malloc0 00:06:38.377 01:53:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.634 Malloc1 00:06:38.634 01:53:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.634 01:53:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.891 /dev/nbd0 00:06:38.891 01:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.891 01:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:38.891 1+0 records in 00:06:38.891 1+0 records out 00:06:38.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269601 s, 15.2 MB/s 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.891 01:53:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.891 01:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.891 01:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.891 01:53:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.148 /dev/nbd1 00:06:39.148 01:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.148 01:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.148 1+0 records in 00:06:39.148 1+0 records out 00:06:39.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192787 s, 21.2 MB/s 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.148 01:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.149 01:53:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.149 01:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.149 01:53:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:39.149 01:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.149 01:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.149 01:53:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.149 01:53:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.149 01:53:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.407 01:53:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:39.407 { 00:06:39.407 "nbd_device": "/dev/nbd0", 00:06:39.407 "bdev_name": "Malloc0" 00:06:39.407 }, 00:06:39.407 { 00:06:39.407 "nbd_device": "/dev/nbd1", 00:06:39.407 "bdev_name": "Malloc1" 00:06:39.407 } 00:06:39.407 ]' 00:06:39.407 01:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.407 { 00:06:39.407 "nbd_device": "/dev/nbd0", 00:06:39.407 "bdev_name": "Malloc0" 00:06:39.407 }, 00:06:39.407 { 00:06:39.407 "nbd_device": "/dev/nbd1", 00:06:39.407 "bdev_name": "Malloc1" 00:06:39.407 } 00:06:39.407 ]' 00:06:39.407 01:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.407 /dev/nbd1' 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.407 /dev/nbd1' 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.407 256+0 records in 00:06:39.407 256+0 records out 00:06:39.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504744 s, 208 MB/s 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.407 256+0 records in 00:06:39.407 256+0 records out 00:06:39.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238277 s, 44.0 MB/s 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.407 256+0 records in 00:06:39.407 256+0 records out 00:06:39.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251414 s, 41.7 MB/s 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.407 01:53:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.666 01:53:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.925 01:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.490 01:53:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.490 01:53:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.747 01:53:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.747 [2024-07-14 01:53:46.424323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.004 [2024-07-14 01:53:46.515288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.004 [2024-07-14 01:53:46.515293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.004 [2024-07-14 01:53:46.577193] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.004 [2024-07-14 01:53:46.577285] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.528 01:53:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1459628 /var/tmp/spdk-nbd.sock 00:06:43.528 01:53:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1459628 ']' 00:06:43.528 01:53:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.528 01:53:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.528 01:53:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
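The zero-count check above queries the running target over its RPC socket and counts how many /dev/nbd entries it still reports; after both nbd_stop_disk calls the expected count is 0. A condensed sketch, assuming rpc.py is invoked relative to the SPDK tree:

nbd_get_count() {
    local rpc_server=$1 disks_json names
    # ask the target which NBD devices it still exports
    disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    # pull out the device paths and count them; grep -c exits non-zero on
    # zero matches, hence the || true
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    echo "$names" | grep -c /dev/nbd || true
}

count=$(nbd_get_count /var/tmp/spdk-nbd.sock)   # expected: 0 after teardown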
00:06:43.528 01:53:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.528 01:53:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.786 01:53:49 event.app_repeat -- event/event.sh@39 -- # killprocess 1459628 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1459628 ']' 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1459628 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1459628 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1459628' 00:06:43.786 killing process with pid 1459628 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1459628 00:06:43.786 01:53:49 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1459628 00:06:44.044 spdk_app_start is called in Round 0. 00:06:44.044 Shutdown signal received, stop current app iteration 00:06:44.044 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:44.044 spdk_app_start is called in Round 1. 00:06:44.044 Shutdown signal received, stop current app iteration 00:06:44.044 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:44.044 spdk_app_start is called in Round 2. 00:06:44.044 Shutdown signal received, stop current app iteration 00:06:44.044 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:44.044 spdk_app_start is called in Round 3. 
00:06:44.044 Shutdown signal received, stop current app iteration 00:06:44.044 01:53:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:44.044 01:53:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:44.044 00:06:44.044 real 0m17.960s 00:06:44.044 user 0m39.136s 00:06:44.044 sys 0m3.175s 00:06:44.044 01:53:49 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.044 01:53:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.044 ************************************ 00:06:44.044 END TEST app_repeat 00:06:44.044 ************************************ 00:06:44.044 01:53:49 event -- common/autotest_common.sh@1142 -- # return 0 00:06:44.044 01:53:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:44.044 01:53:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:44.044 01:53:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.044 01:53:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.044 01:53:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.044 ************************************ 00:06:44.044 START TEST cpu_locks 00:06:44.044 ************************************ 00:06:44.044 01:53:49 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:44.301 * Looking for test storage... 00:06:44.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:44.301 01:53:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:44.301 01:53:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:44.301 01:53:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:44.301 01:53:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:44.301 01:53:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.301 01:53:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.301 01:53:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.301 ************************************ 00:06:44.301 START TEST default_locks 00:06:44.301 ************************************ 00:06:44.302 01:53:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:44.302 01:53:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1461996 00:06:44.302 01:53:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.302 01:53:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1461996 00:06:44.302 01:53:49 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1461996 ']' 00:06:44.302 01:53:49 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.302 01:53:49 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.302 01:53:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
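The cpu_locks suite that starts here checks SPDK's per-core lock files. In the default_locks test traced below, a single spdk_tgt is started on core 0 and the test asserts that the process holds a spdk_cpu_lock file lock; a rough sketch of that assertion, with the PID variable and the success message as placeholders:

# spdk_tgt_pid is assumed to hold the PID of the spdk_tgt started with -m 0x1
locks_exist() {
    local pid=$1
    # lslocks lists file locks per process; the per-core lock file shows up
    # as spdk_cpu_lock* while the target is running
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

if locks_exist "$spdk_tgt_pid"; then
    echo "core mask lock is held by $spdk_tgt_pid"
fi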
00:06:44.302 01:53:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.302 01:53:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.302 [2024-07-14 01:53:49.861566] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:44.302 [2024-07-14 01:53:49.861659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461996 ] 00:06:44.302 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.302 [2024-07-14 01:53:49.919888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.559 [2024-07-14 01:53:50.006148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.817 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.817 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:44.817 01:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1461996 00:06:44.817 01:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1461996 00:06:44.817 01:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.076 lslocks: write error 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1461996 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1461996 ']' 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1461996 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1461996 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1461996' 00:06:45.076 killing process with pid 1461996 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1461996 00:06:45.076 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1461996 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1461996 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1461996 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1461996 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1461996 ']' 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1461996) - No such process 00:06:45.335 ERROR: process (pid: 1461996) is no longer running 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.335 00:06:45.335 real 0m1.147s 00:06:45.335 user 0m1.095s 00:06:45.335 sys 0m0.508s 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.335 01:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.335 ************************************ 00:06:45.335 END TEST default_locks 00:06:45.335 ************************************ 00:06:45.335 01:53:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.335 01:53:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:45.335 01:53:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.335 01:53:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.335 01:53:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.335 ************************************ 00:06:45.335 START TEST default_locks_via_rpc 00:06:45.335 ************************************ 00:06:45.335 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:45.335 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1462160 00:06:45.335 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.335 01:53:51 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1462160 00:06:45.335 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1462160 ']' 00:06:45.335 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.335 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.335 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.335 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.335 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.593 [2024-07-14 01:53:51.055049] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:45.593 [2024-07-14 01:53:51.055150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462160 ] 00:06:45.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.593 [2024-07-14 01:53:51.114304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.593 [2024-07-14 01:53:51.203191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1462160 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1462160 00:06:45.852 01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.109 
01:53:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1462160 00:06:46.109 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1462160 ']' 00:06:46.109 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1462160 00:06:46.109 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:46.109 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.109 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1462160 00:06:46.368 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.368 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.368 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1462160' 00:06:46.368 killing process with pid 1462160 00:06:46.368 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1462160 00:06:46.368 01:53:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1462160 00:06:46.628 00:06:46.628 real 0m1.220s 00:06:46.628 user 0m1.162s 00:06:46.628 sys 0m0.524s 00:06:46.628 01:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.628 01:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.628 ************************************ 00:06:46.628 END TEST default_locks_via_rpc 00:06:46.628 ************************************ 00:06:46.628 01:53:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:46.628 01:53:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:46.628 01:53:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.628 01:53:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.628 01:53:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.628 ************************************ 00:06:46.628 START TEST non_locking_app_on_locked_coremask 00:06:46.628 ************************************ 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1462326 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1462326 /var/tmp/spdk.sock 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1462326 ']' 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.628 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.887 [2024-07-14 01:53:52.326439] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:46.888 [2024-07-14 01:53:52.326530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462326 ] 00:06:46.888 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.888 [2024-07-14 01:53:52.388960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.888 [2024-07-14 01:53:52.477305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1462453 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1462453 /var/tmp/spdk2.sock 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1462453 ']' 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.146 01:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.146 [2024-07-14 01:53:52.798164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:47.146 [2024-07-14 01:53:52.798256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462453 ] 00:06:47.146 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.404 [2024-07-14 01:53:52.887138] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
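The locks_exist helper that keeps appearing in the expanded trace (event/cpu_locks.sh@22) is just lslocks piped into grep: it asks whether the given pid is still holding a lock whose name contains spdk_cpu_lock. A minimal sketch of that check, reconstructed from the trace rather than copied from the script source (the pid below is the first target of the test above; the "lslocks: write error" seen nearby in the log is most likely lslocks complaining when grep -q closes the pipe early):

    # Return success if the given pid holds at least one SPDK per-core lock.
    # Reconstructed from the expanded trace; not the verbatim cpu_locks.sh source.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 1462326 && echo "core lock held by 1462326"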
00:06:47.404 [2024-07-14 01:53:52.887195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.404 [2024-07-14 01:53:53.066393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.382 01:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.382 01:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:48.382 01:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1462326 00:06:48.382 01:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1462326 00:06:48.382 01:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.640 lslocks: write error 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1462326 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1462326 ']' 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1462326 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1462326 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1462326' 00:06:48.640 killing process with pid 1462326 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1462326 00:06:48.640 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1462326 00:06:49.576 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1462453 00:06:49.576 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1462453 ']' 00:06:49.576 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1462453 00:06:49.576 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.576 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.576 01:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1462453 00:06:49.576 01:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.576 01:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.576 01:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1462453' 00:06:49.576 
killing process with pid 1462453 00:06:49.576 01:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1462453 00:06:49.576 01:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1462453 00:06:49.835 00:06:49.835 real 0m3.145s 00:06:49.835 user 0m3.268s 00:06:49.835 sys 0m1.027s 00:06:49.835 01:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.835 01:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.835 ************************************ 00:06:49.835 END TEST non_locking_app_on_locked_coremask 00:06:49.835 ************************************ 00:06:49.835 01:53:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.835 01:53:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:49.835 01:53:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.835 01:53:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.835 01:53:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.835 ************************************ 00:06:49.835 START TEST locking_app_on_unlocked_coremask 00:06:49.835 ************************************ 00:06:49.835 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:49.835 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1462763 00:06:49.835 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:49.835 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1462763 /var/tmp/spdk.sock 00:06:49.835 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1462763 ']' 00:06:49.835 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.835 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.836 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.836 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.836 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.836 [2024-07-14 01:53:55.524143] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:49.836 [2024-07-14 01:53:55.524229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462763 ] 00:06:50.095 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.095 [2024-07-14 01:53:55.586816] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:50.095 [2024-07-14 01:53:55.586853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.095 [2024-07-14 01:53:55.674834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1462831 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1462831 /var/tmp/spdk2.sock 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1462831 ']' 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.353 01:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.353 [2024-07-14 01:53:55.983811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
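The locking_app_on_unlocked_coremask run above launches two targets on the same single-core mask: the first with --disable-cpumask-locks, so core 0 stays unclaimed, and the second with default locking on its own RPC socket, which is therefore expected to start cleanly. A condensed sketch of that sequence with the command lines taken from the trace (the workspace path is shortened, and the backgrounding plus comments stand in for the test's own waitforlisten readiness waits):

    # First instance: core mask 0x1, per-core lock files disabled.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    spdk_tgt_pid=$!

    # Second instance: same mask, default locking, separate RPC socket;
    # it can claim core 0 because the first instance never locked it.
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!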
00:06:50.353 [2024-07-14 01:53:55.983916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462831 ] 00:06:50.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.612 [2024-07-14 01:53:56.076689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.612 [2024-07-14 01:53:56.260591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.546 01:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.546 01:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:51.546 01:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1462831 00:06:51.546 01:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1462831 00:06:51.546 01:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.804 lslocks: write error 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1462763 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1462763 ']' 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1462763 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1462763 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1462763' 00:06:51.805 killing process with pid 1462763 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1462763 00:06:51.805 01:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1462763 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1462831 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1462831 ']' 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1462831 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1462831 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1462831' 00:06:52.739 killing process with pid 1462831 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1462831 00:06:52.739 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1462831 00:06:53.305 00:06:53.305 real 0m3.219s 00:06:53.305 user 0m3.336s 00:06:53.305 sys 0m1.065s 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.305 ************************************ 00:06:53.305 END TEST locking_app_on_unlocked_coremask 00:06:53.305 ************************************ 00:06:53.305 01:53:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:53.305 01:53:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:53.305 01:53:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.305 01:53:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.305 01:53:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.305 ************************************ 00:06:53.305 START TEST locking_app_on_locked_coremask 00:06:53.305 ************************************ 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1463197 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1463197 /var/tmp/spdk.sock 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1463197 ']' 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.305 01:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.305 [2024-07-14 01:53:58.793555] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:53.305 [2024-07-14 01:53:58.793645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463197 ] 00:06:53.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.305 [2024-07-14 01:53:58.859618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.305 [2024-07-14 01:53:58.948225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1463206 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1463206 /var/tmp/spdk2.sock 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1463206 /var/tmp/spdk2.sock 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1463206 /var/tmp/spdk2.sock 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1463206 ']' 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.563 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.821 [2024-07-14 01:53:59.262895] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
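The second start in the locked-coremask test above is wrapped in NOT, so the step only passes if waitforlisten fails; that startup is expected to be refused because pid 1463197 already holds the core 0 lock. A sketch of the inversion pattern visible in the trace (the real helper in autotest_common.sh does more bookkeeping; this only captures the exit-status flip):

    # Run a command and invert its exit status: an expected failure passes the step.
    # Simplified reconstruction, not the verbatim autotest_common.sh helper.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT waitforlisten 1463206 /var/tmp/spdk2.sock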
00:06:53.821 [2024-07-14 01:53:59.262985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463206 ] 00:06:53.821 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.821 [2024-07-14 01:53:59.362330] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1463197 has claimed it. 00:06:53.821 [2024-07-14 01:53:59.362390] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1463206) - No such process 00:06:54.386 ERROR: process (pid: 1463206) is no longer running 00:06:54.386 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.386 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:54.386 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:54.386 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.386 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.386 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.386 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1463197 00:06:54.386 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1463197 00:06:54.386 01:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.644 lslocks: write error 00:06:54.644 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1463197 00:06:54.644 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1463197 ']' 00:06:54.644 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1463197 00:06:54.645 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.645 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.645 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1463197 00:06:54.645 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.645 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.645 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1463197' 00:06:54.645 killing process with pid 1463197 00:06:54.645 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1463197 00:06:54.645 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1463197 00:06:55.211 00:06:55.211 real 0m1.861s 00:06:55.211 user 0m2.019s 00:06:55.211 sys 0m0.605s 00:06:55.211 01:54:00 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.211 01:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.211 ************************************ 00:06:55.211 END TEST locking_app_on_locked_coremask 00:06:55.211 ************************************ 00:06:55.211 01:54:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.211 01:54:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:55.211 01:54:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.211 01:54:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.211 01:54:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.211 ************************************ 00:06:55.211 START TEST locking_overlapped_coremask 00:06:55.211 ************************************ 00:06:55.211 01:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:55.211 01:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1463537 00:06:55.211 01:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:55.211 01:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1463537 /var/tmp/spdk.sock 00:06:55.211 01:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1463537 ']' 00:06:55.211 01:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.211 01:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.212 01:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.212 01:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.212 01:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.212 [2024-07-14 01:54:00.702908] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:55.212 [2024-07-14 01:54:00.702993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463537 ] 00:06:55.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.212 [2024-07-14 01:54:00.762591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.212 [2024-07-14 01:54:00.850118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.212 [2024-07-14 01:54:00.850184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.212 [2024-07-14 01:54:00.850187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1463568 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1463568 /var/tmp/spdk2.sock 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1463568 /var/tmp/spdk2.sock 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1463568 /var/tmp/spdk2.sock 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1463568 ']' 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.469 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.469 [2024-07-14 01:54:01.139583] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:55.469 [2024-07-14 01:54:01.139673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463568 ] 00:06:55.726 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.726 [2024-07-14 01:54:01.231575] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1463537 has claimed it. 00:06:55.726 [2024-07-14 01:54:01.231622] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:56.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1463568) - No such process 00:06:56.289 ERROR: process (pid: 1463568) is no longer running 00:06:56.289 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1463537 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1463537 ']' 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1463537 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1463537 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1463537' 00:06:56.290 killing process with pid 1463537 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1463537 00:06:56.290 01:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1463537 00:06:56.854 00:06:56.854 real 0m1.626s 00:06:56.854 user 0m4.425s 00:06:56.854 sys 0m0.440s 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.854 ************************************ 00:06:56.854 END TEST locking_overlapped_coremask 00:06:56.854 ************************************ 00:06:56.854 01:54:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.854 01:54:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:56.854 01:54:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.854 01:54:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.854 01:54:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.854 ************************************ 00:06:56.854 START TEST locking_overlapped_coremask_via_rpc 00:06:56.854 ************************************ 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1463781 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1463781 /var/tmp/spdk.sock 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1463781 ']' 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.854 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.854 [2024-07-14 01:54:02.382809] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:56.854 [2024-07-14 01:54:02.382921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463781 ] 00:06:56.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.854 [2024-07-14 01:54:02.446850] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
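The "Cannot create lock on core 2" failures in the overlapped tests fall straight out of the mask arithmetic: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the two instances contend only for core 2. A one-line check of that overlap (mask values copied from the command lines in the trace):

    # 0x7 -> cores 0,1,2 and 0x1c -> cores 2,3,4; the intersection is bit 2, i.e. core 2.
    printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints: shared mask: 0x4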
00:06:56.854 [2024-07-14 01:54:02.446890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.854 [2024-07-14 01:54:02.537687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.854 [2024-07-14 01:54:02.537755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.854 [2024-07-14 01:54:02.537758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1463903 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1463903 /var/tmp/spdk2.sock 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1463903 ']' 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.112 01:54:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.370 [2024-07-14 01:54:02.837197] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:57.370 [2024-07-14 01:54:02.837281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463903 ] 00:06:57.370 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.370 [2024-07-14 01:54:02.926839] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
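Both overlapped tests finish by running check_remaining_locks to confirm that only the three lock files belonging to the 0x7 instance remain under /var/tmp. A sketch of that comparison as it is expanded in the trace (the array names and paths appear verbatim in the log):

    # Lock files actually present vs. the set expected for cores 0-2 of the 0x7 mask.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"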
00:06:57.370 [2024-07-14 01:54:02.926900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.628 [2024-07-14 01:54:03.109707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.628 [2024-07-14 01:54:03.109770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:57.628 [2024-07-14 01:54:03.109772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.191 [2024-07-14 01:54:03.793966] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1463781 has claimed it. 
00:06:58.191 request: 00:06:58.191 { 00:06:58.191 "method": "framework_enable_cpumask_locks", 00:06:58.191 "req_id": 1 00:06:58.191 } 00:06:58.191 Got JSON-RPC error response 00:06:58.191 response: 00:06:58.191 { 00:06:58.191 "code": -32603, 00:06:58.191 "message": "Failed to claim CPU core: 2" 00:06:58.191 } 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1463781 /var/tmp/spdk.sock 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1463781 ']' 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.191 01:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.448 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.448 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:58.448 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1463903 /var/tmp/spdk2.sock 00:06:58.448 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1463903 ']' 00:06:58.448 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.448 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.448 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
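The request/response pair dumped above is the JSON-RPC view of that refusal: framework_enable_cpumask_locks on the second instance returns error -32603 because core 2 is already locked by pid 1463781. The test issues it through the rpc_cmd wrapper; an equivalent direct call against the second instance's socket would look roughly like this (the scripts/rpc.py path is assumed from a standard SPDK checkout and is not shown in the log):

    # Ask the 0x1c target on spdk2.sock to start claiming its per-core locks;
    # expected to fail here with "Failed to claim CPU core: 2".
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks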
00:06:58.448 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.448 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.706 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.706 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:58.706 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:58.706 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.706 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.707 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.707 00:06:58.707 real 0m1.971s 00:06:58.707 user 0m1.022s 00:06:58.707 sys 0m0.174s 00:06:58.707 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.707 01:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.707 ************************************ 00:06:58.707 END TEST locking_overlapped_coremask_via_rpc 00:06:58.707 ************************************ 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:58.707 01:54:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:58.707 01:54:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1463781 ]] 00:06:58.707 01:54:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1463781 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1463781 ']' 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1463781 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1463781 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1463781' 00:06:58.707 killing process with pid 1463781 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1463781 00:06:58.707 01:54:04 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1463781 00:06:59.273 01:54:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1463903 ]] 00:06:59.273 01:54:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1463903 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1463903 ']' 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1463903 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1463903 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1463903' 00:06:59.273 killing process with pid 1463903 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1463903 00:06:59.273 01:54:04 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1463903 00:06:59.531 01:54:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.531 01:54:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:59.531 01:54:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1463781 ]] 00:06:59.531 01:54:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1463781 00:06:59.531 01:54:05 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1463781 ']' 00:06:59.531 01:54:05 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1463781 00:06:59.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1463781) - No such process 00:06:59.531 01:54:05 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1463781 is not found' 00:06:59.531 Process with pid 1463781 is not found 00:06:59.531 01:54:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1463903 ]] 00:06:59.531 01:54:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1463903 00:06:59.531 01:54:05 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1463903 ']' 00:06:59.531 01:54:05 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1463903 00:06:59.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1463903) - No such process 00:06:59.531 01:54:05 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1463903 is not found' 00:06:59.531 Process with pid 1463903 is not found 00:06:59.531 01:54:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.531 00:06:59.531 real 0m15.452s 00:06:59.531 user 0m27.088s 00:06:59.531 sys 0m5.248s 00:06:59.531 01:54:05 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.531 01:54:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.531 ************************************ 00:06:59.531 END TEST cpu_locks 00:06:59.531 ************************************ 00:06:59.531 01:54:05 event -- common/autotest_common.sh@1142 -- # return 0 00:06:59.531 00:06:59.531 real 0m39.244s 00:06:59.531 user 1m15.108s 00:06:59.531 sys 0m9.246s 00:06:59.531 01:54:05 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.531 01:54:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.531 ************************************ 00:06:59.531 END TEST event 00:06:59.531 ************************************ 00:06:59.789 01:54:05 -- common/autotest_common.sh@1142 -- # return 0 00:06:59.789 01:54:05 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:59.789 01:54:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.789 01:54:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.789 
01:54:05 -- common/autotest_common.sh@10 -- # set +x 00:06:59.789 ************************************ 00:06:59.789 START TEST thread 00:06:59.789 ************************************ 00:06:59.789 01:54:05 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:59.789 * Looking for test storage... 00:06:59.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:59.789 01:54:05 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.789 01:54:05 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:59.789 01:54:05 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.789 01:54:05 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.789 ************************************ 00:06:59.789 START TEST thread_poller_perf 00:06:59.789 ************************************ 00:06:59.789 01:54:05 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.789 [2024-07-14 01:54:05.334267] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:59.789 [2024-07-14 01:54:05.334328] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464277 ] 00:06:59.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.789 [2024-07-14 01:54:05.396858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.046 [2024-07-14 01:54:05.486262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.046 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:00.977 ====================================== 00:07:00.977 busy:2707670624 (cyc) 00:07:00.977 total_run_count: 285000 00:07:00.977 tsc_hz: 2700000000 (cyc) 00:07:00.977 ====================================== 00:07:00.977 poller_cost: 9500 (cyc), 3518 (nsec) 00:07:00.977 00:07:00.977 real 0m1.251s 00:07:00.977 user 0m1.164s 00:07:00.977 sys 0m0.079s 00:07:00.977 01:54:06 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.977 01:54:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.977 ************************************ 00:07:00.977 END TEST thread_poller_perf 00:07:00.977 ************************************ 00:07:00.977 01:54:06 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:00.977 01:54:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.977 01:54:06 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:00.977 01:54:06 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.977 01:54:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.977 ************************************ 00:07:00.977 START TEST thread_poller_perf 00:07:00.977 ************************************ 00:07:00.977 01:54:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.977 [2024-07-14 01:54:06.630548] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:00.977 [2024-07-14 01:54:06.630611] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464433 ] 00:07:00.977 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.236 [2024-07-14 01:54:06.697415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.236 [2024-07-14 01:54:06.791187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.236 Running 1000 pollers for 1 seconds with 0 microseconds period. 
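Each ====================================== summary reports busy cycles, total_run_count and tsc_hz, and poller_cost is simply the busy cycle count divided by the number of poller invocations, then scaled to nanoseconds by the TSC frequency. Rechecking the first (1 us period) run by hand (standalone shell, illustrative only):

  busy_cyc=2707670624    # busy: (cyc)
  runs=285000            # total_run_count
  tsc_hz=2700000000      # tsc_hz: (cyc)
  cost_cyc=$(( busy_cyc / runs ))                  # 9500
  cost_ns=$(( cost_cyc * 1000000000 / tsc_hz ))    # 3518
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_ns} (nsec)"   # matches the report above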
00:07:02.256 ====================================== 00:07:02.256 busy:2702564826 (cyc) 00:07:02.256 total_run_count: 3857000 00:07:02.256 tsc_hz: 2700000000 (cyc) 00:07:02.256 ====================================== 00:07:02.256 poller_cost: 700 (cyc), 259 (nsec) 00:07:02.256 00:07:02.256 real 0m1.253s 00:07:02.256 user 0m1.161s 00:07:02.256 sys 0m0.086s 00:07:02.256 01:54:07 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.256 01:54:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.256 ************************************ 00:07:02.256 END TEST thread_poller_perf 00:07:02.256 ************************************ 00:07:02.256 01:54:07 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:02.256 01:54:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:02.256 00:07:02.256 real 0m2.644s 00:07:02.256 user 0m2.384s 00:07:02.256 sys 0m0.257s 00:07:02.256 01:54:07 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.256 01:54:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.256 ************************************ 00:07:02.256 END TEST thread 00:07:02.256 ************************************ 00:07:02.256 01:54:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:02.256 01:54:07 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:02.256 01:54:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.256 01:54:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.256 01:54:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.256 ************************************ 00:07:02.256 START TEST accel 00:07:02.257 ************************************ 00:07:02.257 01:54:07 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:02.515 * Looking for test storage... 00:07:02.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:02.515 01:54:07 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:02.515 01:54:07 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:02.515 01:54:07 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:02.515 01:54:07 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1464944 00:07:02.515 01:54:07 accel -- accel/accel.sh@63 -- # waitforlisten 1464944 00:07:02.515 01:54:07 accel -- common/autotest_common.sh@829 -- # '[' -z 1464944 ']' 00:07:02.515 01:54:07 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.515 01:54:07 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:02.515 01:54:07 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:02.515 01:54:07 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.515 01:54:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.515 01:54:07 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
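waitforlisten polls until the freshly started spdk_tgt (pid 1464944 here) is answering on /var/tmp/spdk.sock; once it is, the same socket can be driven by hand with the bundled RPC client. Illustrative commands from the spdk checkout (rpc_get_methods is a generic SPDK RPC assumed to be available; accel_get_opc_assignments is the call this test issues next):

  sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods | head
  sudo ./scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments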
00:07:02.515 01:54:07 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.515 01:54:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.515 01:54:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.515 01:54:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.515 01:54:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.515 01:54:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.515 01:54:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:02.515 01:54:07 accel -- accel/accel.sh@41 -- # jq -r . 00:07:02.515 [2024-07-14 01:54:08.044236] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:02.515 [2024-07-14 01:54:08.044337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464944 ] 00:07:02.515 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.515 [2024-07-14 01:54:08.110481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.515 [2024-07-14 01:54:08.200769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.773 01:54:08 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.773 01:54:08 accel -- common/autotest_common.sh@862 -- # return 0 00:07:02.773 01:54:08 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:02.773 01:54:08 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:02.773 01:54:08 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:02.773 01:54:08 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:02.773 01:54:08 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:02.773 01:54:08 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:02.773 01:54:08 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.773 01:54:08 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:02.773 01:54:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.032 01:54:08 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.032 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.032 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.032 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.032 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.032 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.032 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.032 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.032 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.032 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.032 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.032 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.032 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.032 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.032 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.032 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.032 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.033 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.033 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.033 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.033 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.033 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.033 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.033 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.033 
01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.033 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.033 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.033 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.033 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.033 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.033 01:54:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.033 01:54:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.033 01:54:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.033 01:54:08 accel -- accel/accel.sh@75 -- # killprocess 1464944 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@948 -- # '[' -z 1464944 ']' 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@952 -- # kill -0 1464944 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@953 -- # uname 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1464944 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1464944' 00:07:03.033 killing process with pid 1464944 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@967 -- # kill 1464944 00:07:03.033 01:54:08 accel -- common/autotest_common.sh@972 -- # wait 1464944 00:07:03.291 01:54:08 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:03.291 01:54:08 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:03.291 01:54:08 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.292 01:54:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.292 01:54:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.292 01:54:08 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:03.292 01:54:08 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:03.292 01:54:08 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:03.292 01:54:08 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.292 01:54:08 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.292 01:54:08 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.292 01:54:08 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.292 01:54:08 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.292 01:54:08 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:03.292 01:54:08 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
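The accel.sh@71-73 loop above (IFS==, read -r opc module) consumes the output of accel_get_opc_assignments after the jq filter '. | to_entries | map("\(.key)=\(.value)") | .[]' has flattened it: the RPC returns a JSON map of opcode to module, and the filter emits one opc=module pair per line, which is why every opcode ends up recorded as software here. Standalone illustration (the JSON object is invented for the example; only the filter is verbatim from the trace):

  echo '{"copy":"software","fill":"software","crc32c":"software"}' \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # fill=software
  # crc32c=software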
00:07:03.292 01:54:08 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.292 01:54:08 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:03.292 01:54:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.292 01:54:08 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:03.292 01:54:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:03.292 01:54:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.292 01:54:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.550 ************************************ 00:07:03.550 START TEST accel_missing_filename 00:07:03.550 ************************************ 00:07:03.550 01:54:08 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:03.550 01:54:08 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:03.550 01:54:08 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:03.550 01:54:08 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.550 01:54:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.550 01:54:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.550 01:54:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.550 01:54:08 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:03.550 01:54:08 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:03.550 01:54:08 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:03.550 01:54:08 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.550 01:54:08 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.550 01:54:08 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.550 01:54:08 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.550 01:54:08 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.550 01:54:08 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:03.550 01:54:08 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:03.550 [2024-07-14 01:54:09.007617] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:03.550 [2024-07-14 01:54:09.007684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465293 ] 00:07:03.550 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.550 [2024-07-14 01:54:09.068980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.550 [2024-07-14 01:54:09.159207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.550 [2024-07-14 01:54:09.216638] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.810 [2024-07-14 01:54:09.288917] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:03.810 A filename is required. 
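The compress run fails exactly as intended, and the es= lines that follow show the harness turning that failure into a pass: the NOT wrapper strips the >128 signal offset from the exit status, collapses any remaining failure to 1 and succeeds only when the wrapped command did not. A rough sketch of the pattern (hypothetical simplified form, not the actual autotest_common.sh source):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && es=$(( es - 128 ))   # e.g. 234 -> 106, as in the trace below
      (( es != 0 )) && es=1                  # collapse any failure to a plain 1
      (( !es == 0 ))                         # NOT succeeds only when the command failed
  }
  NOT accel_perf -t 1 -w compress            # passes: compress without -l <file> must fail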
00:07:03.810 01:54:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:03.810 01:54:09 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.810 01:54:09 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:03.810 01:54:09 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.810 01:54:09 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:03.810 01:54:09 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.810 00:07:03.810 real 0m0.373s 00:07:03.810 user 0m0.273s 00:07:03.810 sys 0m0.132s 00:07:03.810 01:54:09 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.810 01:54:09 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:03.810 ************************************ 00:07:03.810 END TEST accel_missing_filename 00:07:03.810 ************************************ 00:07:03.810 01:54:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.810 01:54:09 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.810 01:54:09 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:03.810 01:54:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.810 01:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.810 ************************************ 00:07:03.810 START TEST accel_compress_verify 00:07:03.810 ************************************ 00:07:03.810 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.810 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:03.810 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.810 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.810 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.810 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.810 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.810 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.810 01:54:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.810 01:54:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:03.810 01:54:09 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.810 01:54:09 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.810 01:54:09 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.810 01:54:09 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.810 01:54:09 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.810 01:54:09 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:03.810 01:54:09 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:03.810 [2024-07-14 01:54:09.431603] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:03.810 [2024-07-14 01:54:09.431679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465334 ] 00:07:03.810 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.810 [2024-07-14 01:54:09.496370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.070 [2024-07-14 01:54:09.593097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.070 [2024-07-14 01:54:09.654564] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.070 [2024-07-14 01:54:09.738504] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:04.331 00:07:04.331 Compression does not support the verify option, aborting. 00:07:04.331 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:04.331 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.331 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:04.331 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:04.331 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:04.331 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.331 00:07:04.331 real 0m0.408s 00:07:04.331 user 0m0.289s 00:07:04.331 sys 0m0.153s 00:07:04.331 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.331 01:54:09 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:04.331 ************************************ 00:07:04.331 END TEST accel_compress_verify 00:07:04.331 ************************************ 00:07:04.331 01:54:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.331 01:54:09 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:04.331 01:54:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:04.331 01:54:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.331 01:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.331 ************************************ 00:07:04.331 START TEST accel_wrong_workload 00:07:04.331 ************************************ 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:04.331 01:54:09 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:04.331 01:54:09 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:04.331 01:54:09 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:04.331 01:54:09 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.331 01:54:09 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.331 01:54:09 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.331 01:54:09 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.331 01:54:09 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.331 01:54:09 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:04.331 01:54:09 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:04.331 Unsupported workload type: foobar 00:07:04.331 [2024-07-14 01:54:09.887907] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:04.331 accel_perf options: 00:07:04.331 [-h help message] 00:07:04.331 [-q queue depth per core] 00:07:04.331 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:04.331 [-T number of threads per core 00:07:04.331 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:04.331 [-t time in seconds] 00:07:04.331 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:04.331 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:04.331 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:04.331 [-l for compress/decompress workloads, name of uncompressed input file 00:07:04.331 [-S for crc32c workload, use this seed value (default 0) 00:07:04.331 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:04.331 [-f for fill workload, use this BYTE value (default 255) 00:07:04.331 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:04.331 [-y verify result if this switch is on] 00:07:04.331 [-a tasks to allocate per core (default: same value as -q)] 00:07:04.331 Can be used to spread operations across a wider range of memory. 
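This usage dump appears because foobar is not one of the supported -w workload types. The positive tests further down drive the same binary with valid combinations; stripped of the harness plumbing (-c /dev/fd/62, which feeds a JSON accel config over a pipe), they amount to invocations such as (illustrative, flags taken from the usage text and the traces below):

  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y              # CRC32C for 1 s, seed 32, verify results
  ./build/examples/accel_perf -t 1 -w copy -y                      # copy workload, verify results
  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill with byte 128, qd 64, 64 tasks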
00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.331 00:07:04.331 real 0m0.024s 00:07:04.331 user 0m0.014s 00:07:04.331 sys 0m0.010s 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.331 01:54:09 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:04.331 ************************************ 00:07:04.331 END TEST accel_wrong_workload 00:07:04.331 ************************************ 00:07:04.331 Error: writing output failed: Broken pipe 00:07:04.331 01:54:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.331 01:54:09 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:04.331 01:54:09 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:04.331 01:54:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.331 01:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.331 ************************************ 00:07:04.331 START TEST accel_negative_buffers 00:07:04.331 ************************************ 00:07:04.331 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:04.331 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:04.331 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:04.331 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:04.331 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.331 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:04.331 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.331 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:04.331 01:54:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:04.331 01:54:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:04.331 01:54:09 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.331 01:54:09 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.331 01:54:09 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.331 01:54:09 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.331 01:54:09 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.331 01:54:09 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:04.331 01:54:09 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:04.331 -x option must be non-negative. 
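accel_perf rejects the negative value before doing any work; since the usage text (printed again just below) documents -x as the number of xor source buffers with a minimum of 2, a valid variant of the rejected command would be, illustratively:

  ./build/examples/accel_perf -t 1 -w xor -y -x 2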
00:07:04.331 [2024-07-14 01:54:09.950796] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:04.331 accel_perf options: 00:07:04.331 [-h help message] 00:07:04.331 [-q queue depth per core] 00:07:04.331 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:04.331 [-T number of threads per core 00:07:04.331 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:04.331 [-t time in seconds] 00:07:04.331 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:04.331 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:04.331 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:04.332 [-l for compress/decompress workloads, name of uncompressed input file 00:07:04.332 [-S for crc32c workload, use this seed value (default 0) 00:07:04.332 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:04.332 [-f for fill workload, use this BYTE value (default 255) 00:07:04.332 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:04.332 [-y verify result if this switch is on] 00:07:04.332 [-a tasks to allocate per core (default: same value as -q)] 00:07:04.332 Can be used to spread operations across a wider range of memory. 00:07:04.332 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:04.332 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.332 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.332 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.332 00:07:04.332 real 0m0.021s 00:07:04.332 user 0m0.012s 00:07:04.332 sys 0m0.009s 00:07:04.332 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.332 01:54:09 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:04.332 ************************************ 00:07:04.332 END TEST accel_negative_buffers 00:07:04.332 ************************************ 00:07:04.332 Error: writing output failed: Broken pipe 00:07:04.332 01:54:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.332 01:54:09 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:04.332 01:54:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:04.332 01:54:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.332 01:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.332 ************************************ 00:07:04.332 START TEST accel_crc32c 00:07:04.332 ************************************ 00:07:04.332 01:54:10 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:04.332 01:54:10 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:04.332 [2024-07-14 01:54:10.021524] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:04.332 [2024-07-14 01:54:10.021605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465510 ] 00:07:04.592 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.592 [2024-07-14 01:54:10.086135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.592 [2024-07-14 01:54:10.178484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.592 01:54:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:05.968 01:54:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.968 00:07:05.968 real 0m1.406s 00:07:05.968 user 0m1.256s 00:07:05.968 sys 0m0.153s 00:07:05.968 01:54:11 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.968 01:54:11 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:05.968 ************************************ 00:07:05.968 END TEST accel_crc32c 00:07:05.968 ************************************ 00:07:05.968 01:54:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.968 01:54:11 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:05.968 01:54:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:05.968 01:54:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.968 01:54:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.968 ************************************ 00:07:05.968 START TEST accel_crc32c_C2 00:07:05.968 ************************************ 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.968 01:54:11 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.968 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:05.968 [2024-07-14 01:54:11.469931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:05.968 [2024-07-14 01:54:11.469992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465673 ] 00:07:05.968 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.968 [2024-07-14 01:54:11.532240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.968 [2024-07-14 01:54:11.625533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:06.256 01:54:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.193 00:07:07.193 real 0m1.402s 00:07:07.193 user 0m1.260s 00:07:07.193 sys 0m0.145s 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.193 01:54:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:07.193 ************************************ 00:07:07.193 END TEST accel_crc32c_C2 00:07:07.193 ************************************ 00:07:07.193 01:54:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.193 01:54:12 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:07.193 01:54:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.193 01:54:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.193 01:54:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.452 ************************************ 00:07:07.452 START TEST accel_copy 00:07:07.452 ************************************ 00:07:07.452 01:54:12 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:07.452 01:54:12 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:07.452 [2024-07-14 01:54:12.918451] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:07.452 [2024-07-14 01:54:12.918515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465903 ] 00:07:07.452 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.452 [2024-07-14 01:54:12.983932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.452 [2024-07-14 01:54:13.077232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.715 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.716 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.716 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.716 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.716 01:54:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.716 01:54:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.716 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.716 01:54:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.653 
01:54:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:08.653 01:54:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.653 00:07:08.653 real 0m1.414s 00:07:08.653 user 0m1.257s 00:07:08.653 sys 0m0.158s 00:07:08.653 01:54:14 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.653 01:54:14 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:08.653 ************************************ 00:07:08.653 END TEST accel_copy 00:07:08.653 ************************************ 00:07:08.653 01:54:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.653 01:54:14 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.653 01:54:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:08.653 01:54:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.653 01:54:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.911 ************************************ 00:07:08.911 START TEST accel_fill 00:07:08.911 ************************************ 00:07:08.911 01:54:14 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:08.911 [2024-07-14 01:54:14.381704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:08.911 [2024-07-14 01:54:14.381770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466102 ] 00:07:08.911 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.911 [2024-07-14 01:54:14.446351] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.911 [2024-07-14 01:54:14.539324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.911 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:09.170 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
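The accel_fill run above goes through the run_test/accel_test wrappers, but the underlying command is visible in the trace. A standalone re-run would look roughly like the sketch below; the binary path is the one from this workspace, and dropping the -c /dev/fd/62 config descriptor is an assumption (the harness only feeds a non-empty config when a hardware accel module is requested, and none is in this run):

    # Sketch of re-running the traced fill workload by hand. The -t/-w/-f/-q/-a/-y
    # flags are forwarded exactly as accel_test passes them: one second of the
    # "fill" workload, with the remaining options taken verbatim from the trace.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the log
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y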
00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.171 01:54:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.106 01:54:15 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:10.106 01:54:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.106 00:07:10.106 real 0m1.408s 00:07:10.106 user 0m1.262s 00:07:10.106 sys 0m0.149s 00:07:10.106 01:54:15 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.106 01:54:15 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:10.106 ************************************ 00:07:10.106 END TEST accel_fill 00:07:10.106 ************************************ 00:07:10.106 01:54:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.106 01:54:15 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:10.106 01:54:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.106 01:54:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.106 01:54:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.367 ************************************ 00:07:10.367 START TEST accel_copy_crc32c 00:07:10.367 ************************************ 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:10.367 01:54:15 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:10.367 [2024-07-14 01:54:15.832210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:10.367 [2024-07-14 01:54:15.832270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466258 ] 00:07:10.367 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.367 [2024-07-14 01:54:15.894260] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.367 [2024-07-14 01:54:15.987455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.367 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.368 
01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.368 01:54:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.743 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.744 00:07:11.744 real 0m1.409s 00:07:11.744 user 0m1.273s 00:07:11.744 sys 0m0.138s 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.744 01:54:17 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:11.744 ************************************ 00:07:11.744 END TEST accel_copy_crc32c 00:07:11.744 ************************************ 00:07:11.744 01:54:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.744 01:54:17 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.744 01:54:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:11.744 01:54:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.744 01:54:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.744 ************************************ 00:07:11.744 START TEST accel_copy_crc32c_C2 00:07:11.744 ************************************ 00:07:11.744 01:54:17 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:11.744 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:11.744 [2024-07-14 01:54:17.294567] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:11.744 [2024-07-14 01:54:17.294636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466410 ] 00:07:11.744 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.744 [2024-07-14 01:54:17.358562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.003 [2024-07-14 01:54:17.454963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
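The build_accel_config lines in the trace (accel_json_cfg=(), the three [[ 0 -gt 0 ]] checks, [[ -n '' ]], local IFS=',' and jq -r .) assemble the optional JSON that accel_perf receives over /dev/fd/62. In this run no hardware accel module is configured, so every check is false and the config stays empty. A rough sketch of that flow, inferred from the trace; the exact JSON wrapper below is an assumption, not copied from accel.sh:

    # Rough sketch: collect optional per-module JSON fragments, join them with
    # commas, and pretty-print the result through jq. With no module requested
    # (as in this log) the array stays empty and nothing is emitted.
    build_accel_config() {
        local accel_json_cfg=()
        # fragments would be appended here when e.g. a dsa or iaa module is requested
        if [[ ${#accel_json_cfg[@]} -gt 0 ]]; then
            local IFS=,
            printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}' "${accel_json_cfg[*]}" | jq -r .
        fi
    }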
00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.003 01:54:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.385 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.385 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.385 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.385 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.385 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
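Most of each test's trace is the same three statements repeating: IFS=:, read -r var val, case "$var" in, with val stepping through the reported settings (0x1, the opcode, '4096 bytes', software, 32, 1, '1 seconds', Yes). That is the harness parsing accel_perf's key/value output and remembering which engine and opcode actually ran. A simplified reconstruction, assuming the matched keys contain "module" and "operation" (the real patterns live in accel.sh):

    # Simplified sketch of the read loop behind the repeated trace lines above.
    # accel_perf prints "key: value" lines; the harness keeps the engine and opcode.
    while IFS=: read -r var val; do
        case "$var" in
            *module*)    accel_module=${val# } ;;   # e.g. "software"
            *operation*) accel_opc=${val# } ;;      # e.g. "copy_crc32c"
            *)           : ;;                       # other lines are ignored
        esac
    done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2)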
00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.386 00:07:13.386 real 0m1.418s 00:07:13.386 user 0m1.278s 00:07:13.386 sys 0m0.141s 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.386 01:54:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:13.386 ************************************ 00:07:13.386 END TEST accel_copy_crc32c_C2 00:07:13.386 ************************************ 00:07:13.386 01:54:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.386 01:54:18 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:13.386 01:54:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:13.386 01:54:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.386 01:54:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.386 ************************************ 00:07:13.386 START TEST accel_dualcast 00:07:13.386 ************************************ 00:07:13.386 01:54:18 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:13.386 [2024-07-14 01:54:18.763799] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:13.386 [2024-07-14 01:54:18.763882] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466684 ] 00:07:13.386 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.386 [2024-07-14 01:54:18.827360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.386 [2024-07-14 01:54:18.920829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.386 01:54:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:14.767 01:54:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.767 00:07:14.767 real 0m1.391s 00:07:14.767 user 0m1.254s 00:07:14.767 sys 0m0.137s 00:07:14.767 01:54:20 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.767 01:54:20 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:14.767 ************************************ 00:07:14.767 END TEST accel_dualcast 00:07:14.767 ************************************ 00:07:14.767 01:54:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.767 01:54:20 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:14.767 01:54:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:14.767 01:54:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.767 01:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.767 ************************************ 00:07:14.767 START TEST accel_compare 00:07:14.767 ************************************ 00:07:14.767 01:54:20 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:14.767 [2024-07-14 01:54:20.194686] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:14.767 [2024-07-14 01:54:20.194751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466840 ] 00:07:14.767 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.767 [2024-07-14 01:54:20.256696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.767 [2024-07-14 01:54:20.348995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.767 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.768 01:54:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.150 
01:54:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:16.150 01:54:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.150 00:07:16.150 real 0m1.394s 00:07:16.150 user 0m1.260s 00:07:16.150 sys 0m0.136s 00:07:16.150 01:54:21 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.150 01:54:21 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:16.150 ************************************ 00:07:16.150 END TEST accel_compare 00:07:16.150 ************************************ 00:07:16.150 01:54:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.150 01:54:21 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:16.150 01:54:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:16.150 01:54:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.150 01:54:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.150 ************************************ 00:07:16.150 START TEST accel_xor 00:07:16.150 ************************************ 00:07:16.150 01:54:21 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:16.150 01:54:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:16.150 [2024-07-14 01:54:21.629585] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
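The accel_xor run starting here is driven through the accel_perf example binary traced above; the test wrapper feeds it a JSON accel config over /dev/fd/62 and then reads each setting back as the val= lines that follow (opcode xor, '4096 bytes' buffers, the software module, a '1 seconds' run). A rough, minimal sketch of the same invocation, assuming the workspace path printed in this log and leaving out the config fd, would be:

  # sketch only: flags copied from the run_test line above; -y presumably enables result verification
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y

The [[ -n software ]] / [[ -n xor ]] checks at the end of each case simply confirm that a module and an opcode were read back and that the software module handled the workload.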
00:07:16.150 [2024-07-14 01:54:21.629639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467004 ] 00:07:16.150 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.150 [2024-07-14 01:54:21.691006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.150 [2024-07-14 01:54:21.784360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.499 01:54:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.441 00:07:17.441 real 0m1.403s 00:07:17.441 user 0m1.262s 00:07:17.441 sys 0m0.142s 00:07:17.441 01:54:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.441 01:54:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:17.441 ************************************ 00:07:17.441 END TEST accel_xor 00:07:17.441 ************************************ 00:07:17.441 01:54:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.441 01:54:23 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:17.441 01:54:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:17.441 01:54:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.441 01:54:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.441 ************************************ 00:07:17.441 START TEST accel_xor 00:07:17.441 ************************************ 00:07:17.441 01:54:23 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:17.441 01:54:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:17.441 [2024-07-14 01:54:23.077755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
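This second accel_xor case repeats the xor workload with -x 3, and the trace below correspondingly reads back val=3 where the previous run read val=2 (presumably the number of xor source buffers). Under the same assumptions as the sketch above, the equivalent command is:

  # sketch: same accel_perf binary, xor workload with three sources requested via -x 3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3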
00:07:17.441 [2024-07-14 01:54:23.077818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467173 ] 00:07:17.441 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.700 [2024-07-14 01:54:23.140254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.700 [2024-07-14 01:54:23.234546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.700 01:54:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:19.081 01:54:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.081 00:07:19.081 real 0m1.407s 00:07:19.081 user 0m1.265s 00:07:19.081 sys 0m0.143s 00:07:19.081 01:54:24 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.081 01:54:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:19.081 ************************************ 00:07:19.081 END TEST accel_xor 00:07:19.081 ************************************ 00:07:19.081 01:54:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.081 01:54:24 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:19.081 01:54:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:19.081 01:54:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.081 01:54:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.081 ************************************ 00:07:19.081 START TEST accel_dif_verify 00:07:19.081 ************************************ 00:07:19.082 01:54:24 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:19.082 [2024-07-14 01:54:24.527881] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:19.082 [2024-07-14 01:54:24.527957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467433 ] 00:07:19.082 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.082 [2024-07-14 01:54:24.592578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.082 [2024-07-14 01:54:24.682859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 01:54:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:20.462 01:54:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.462 00:07:20.462 real 0m1.398s 00:07:20.462 user 0m1.267s 00:07:20.462 sys 0m0.133s 00:07:20.463 01:54:25 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.463 01:54:25 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:20.463 ************************************ 00:07:20.463 END TEST accel_dif_verify 00:07:20.463 ************************************ 00:07:20.463 01:54:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.463 01:54:25 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:20.463 01:54:25 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:20.463 01:54:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.463 01:54:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.463 ************************************ 00:07:20.463 START TEST accel_dif_generate 00:07:20.463 ************************************ 00:07:20.463 01:54:25 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.463 
01:54:25 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:20.463 01:54:25 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:20.463 [2024-07-14 01:54:25.969528] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:20.463 [2024-07-14 01:54:25.969591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467589 ] 00:07:20.463 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.463 [2024-07-14 01:54:26.031337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.463 [2024-07-14 01:54:26.123546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:20.723 01:54:26 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.723 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.724 01:54:26 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.724 01:54:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:21.658 01:54:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:21.917 01:54:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:21.917 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:21.917 01:54:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:21.918 01:54:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.918 01:54:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:21.918 01:54:27 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.918 00:07:21.918 real 0m1.397s 00:07:21.918 user 0m1.259s 00:07:21.918 sys 0m0.142s 00:07:21.918 01:54:27 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.918 01:54:27 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:21.918 ************************************ 00:07:21.918 END TEST accel_dif_generate 00:07:21.918 ************************************ 00:07:21.918 01:54:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.918 01:54:27 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:21.918 01:54:27 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:21.918 01:54:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.918 01:54:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.918 ************************************ 00:07:21.918 START TEST accel_dif_generate_copy 00:07:21.918 ************************************ 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:21.918 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:21.918 [2024-07-14 01:54:27.410851] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
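accel_dif_generate_copy exercises the copy variant of DIF generation; the trace below reads back two '4096 bytes' buffers and the same one-second, software-module settings as the other DIF cases, with val=No where the xor runs showed val=Yes, matching the absence of -y here. A sketch of the equivalent command, under the same assumptions as above:

  # sketch: dif_generate_copy workload, software module, 1 second run
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy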
00:07:21.918 [2024-07-14 01:54:27.411003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467747 ] 00:07:21.918 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.918 [2024-07-14 01:54:27.472338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.918 [2024-07-14 01:54:27.566547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.177 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.178 01:54:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.113 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.113 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.114 00:07:23.114 real 0m1.404s 00:07:23.114 user 0m1.255s 00:07:23.114 sys 0m0.150s 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.114 01:54:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.114 ************************************ 00:07:23.114 END TEST accel_dif_generate_copy 00:07:23.114 ************************************ 00:07:23.373 01:54:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.373 01:54:28 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:23.373 01:54:28 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.373 01:54:28 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:23.373 01:54:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.373 01:54:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.373 ************************************ 00:07:23.373 START TEST accel_comp 00:07:23.373 ************************************ 00:07:23.373 01:54:28 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.373 01:54:28 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:23.373 01:54:28 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:23.373 [2024-07-14 01:54:28.861754] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:23.373 [2024-07-14 01:54:28.861819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468014 ] 00:07:23.373 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.373 [2024-07-14 01:54:28.925983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.373 [2024-07-14 01:54:29.018832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.632 01:54:29 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.632 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.633 01:54:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.567 01:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.567 01:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.567 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.567 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.567 01:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:24.826 01:54:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.826 00:07:24.826 real 0m1.418s 00:07:24.826 user 0m1.273s 00:07:24.826 sys 0m0.149s 00:07:24.826 01:54:30 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.826 01:54:30 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:24.826 ************************************ 00:07:24.826 END TEST accel_comp 00:07:24.826 ************************************ 00:07:24.826 01:54:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.826 01:54:30 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.826 01:54:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:24.826 01:54:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.826 01:54:30 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.826 ************************************ 00:07:24.826 START TEST accel_decomp 00:07:24.826 ************************************ 00:07:24.826 01:54:30 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:24.826 01:54:30 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:24.826 [2024-07-14 01:54:30.324456] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:24.826 [2024-07-14 01:54:30.324523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468172 ] 00:07:24.826 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.826 [2024-07-14 01:54:30.387165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.826 [2024-07-14 01:54:30.478500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.084 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.085 01:54:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.458 01:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.458 01:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.458 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.459 01:54:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.459 00:07:26.459 real 0m1.407s 00:07:26.459 user 0m1.265s 00:07:26.459 sys 0m0.146s 00:07:26.459 01:54:31 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.459 01:54:31 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:26.459 ************************************ 00:07:26.459 END TEST accel_decomp 00:07:26.459 ************************************ 00:07:26.459 01:54:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.459 01:54:31 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.459 01:54:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:26.459 01:54:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.459 01:54:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.459 ************************************ 00:07:26.459 START TEST accel_decomp_full 00:07:26.459 ************************************ 00:07:26.459 01:54:31 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.459 01:54:31 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:26.459 [2024-07-14 01:54:31.775002] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:26.459 [2024-07-14 01:54:31.775068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468336 ] 00:07:26.459 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.459 [2024-07-14 01:54:31.836987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.459 [2024-07-14 01:54:31.928572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:26.459 01:54:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.459 01:54:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.459 01:54:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.460 01:54:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.460 01:54:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.460 01:54:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.831 01:54:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.831 00:07:27.831 real 0m1.423s 00:07:27.831 user 0m1.282s 00:07:27.831 sys 0m0.144s 00:07:27.831 01:54:33 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.831 01:54:33 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:27.831 ************************************ 00:07:27.831 END TEST accel_decomp_full 00:07:27.831 ************************************ 00:07:27.831 01:54:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.831 01:54:33 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.831 01:54:33 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:27.831 01:54:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.831 01:54:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.831 ************************************ 00:07:27.831 START TEST accel_decomp_mcore 00:07:27.831 ************************************ 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:27.831 [2024-07-14 01:54:33.251167] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:27.831 [2024-07-14 01:54:33.251232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468492 ] 00:07:27.831 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.831 [2024-07-14 01:54:33.314699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.831 [2024-07-14 01:54:33.409829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.831 [2024-07-14 01:54:33.409916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.831 [2024-07-14 01:54:33.409941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.831 [2024-07-14 01:54:33.409943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.831 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:27.832 01:54:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.204 00:07:29.204 real 0m1.415s 00:07:29.204 user 0m4.704s 00:07:29.204 sys 0m0.157s 00:07:29.204 01:54:34 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.204 01:54:34 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:29.204 ************************************ 00:07:29.204 END TEST accel_decomp_mcore 00:07:29.204 ************************************ 00:07:29.204 01:54:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.204 01:54:34 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.204 01:54:34 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:29.204 01:54:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.204 01:54:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.204 ************************************ 00:07:29.204 START TEST accel_decomp_full_mcore 00:07:29.204 ************************************ 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.204 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:29.205 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:29.205 [2024-07-14 01:54:34.718263] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:29.205 [2024-07-14 01:54:34.718331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468767 ] 00:07:29.205 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.205 [2024-07-14 01:54:34.782578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.205 [2024-07-14 01:54:34.877470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.205 [2024-07-14 01:54:34.877544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.205 [2024-07-14 01:54:34.877635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.205 [2024-07-14 01:54:34.877637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 01:54:34 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:29.464 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.464 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.464 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.464 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.464 01:54:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.838 00:07:30.838 real 0m1.430s 00:07:30.838 user 0m4.761s 00:07:30.838 sys 0m0.154s 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.838 01:54:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:30.838 ************************************ 00:07:30.838 END TEST accel_decomp_full_mcore 00:07:30.838 ************************************ 00:07:30.838 01:54:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.838 01:54:36 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.838 01:54:36 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:30.838 01:54:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.838 01:54:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.838 ************************************ 00:07:30.838 START TEST accel_decomp_mthread 00:07:30.838 ************************************ 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:30.838 [2024-07-14 01:54:36.194036] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:30.838 [2024-07-14 01:54:36.194100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468927 ] 00:07:30.838 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.838 [2024-07-14 01:54:36.255994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.838 [2024-07-14 01:54:36.348570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.838 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.839 01:54:36 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.839 01:54:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.243 01:54:37 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.243 00:07:32.243 real 0m1.403s 00:07:32.243 user 0m1.256s 00:07:32.243 sys 0m0.151s 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.243 01:54:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:32.244 ************************************ 00:07:32.244 END TEST accel_decomp_mthread 00:07:32.244 ************************************ 00:07:32.244 01:54:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.244 01:54:37 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.244 01:54:37 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:32.244 01:54:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.244 01:54:37 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.244 ************************************ 00:07:32.244 START TEST accel_decomp_full_mthread 00:07:32.244 ************************************ 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:32.244 [2024-07-14 01:54:37.643562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:32.244 [2024-07-14 01:54:37.643626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469090 ] 00:07:32.244 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.244 [2024-07-14 01:54:37.707629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.244 [2024-07-14 01:54:37.800897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.244 01:54:37 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.244 01:54:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.618 00:07:33.618 real 0m1.436s 00:07:33.618 user 0m1.293s 00:07:33.618 sys 0m0.146s 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.618 01:54:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:33.618 ************************************ 00:07:33.618 END 
TEST accel_decomp_full_mthread 00:07:33.618 ************************************ 00:07:33.618 01:54:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.618 01:54:39 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:33.618 01:54:39 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:33.618 01:54:39 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:33.618 01:54:39 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:33.618 01:54:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.618 01:54:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.618 01:54:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.618 01:54:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.618 01:54:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.618 01:54:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.618 01:54:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.618 01:54:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:33.618 01:54:39 accel -- accel/accel.sh@41 -- # jq -r . 00:07:33.618 ************************************ 00:07:33.618 START TEST accel_dif_functional_tests 00:07:33.618 ************************************ 00:07:33.618 01:54:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:33.618 [2024-07-14 01:54:39.150238] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:33.618 [2024-07-14 01:54:39.150315] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469362 ] 00:07:33.618 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.618 [2024-07-14 01:54:39.211567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.618 [2024-07-14 01:54:39.306532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.618 [2024-07-14 01:54:39.306595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.618 [2024-07-14 01:54:39.306598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.877 00:07:33.877 00:07:33.877 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.877 http://cunit.sourceforge.net/ 00:07:33.877 00:07:33.877 00:07:33.877 Suite: accel_dif 00:07:33.877 Test: verify: DIF generated, GUARD check ...passed 00:07:33.877 Test: verify: DIF generated, APPTAG check ...passed 00:07:33.877 Test: verify: DIF generated, REFTAG check ...passed 00:07:33.877 Test: verify: DIF not generated, GUARD check ...[2024-07-14 01:54:39.400173] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:33.877 passed 00:07:33.877 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 01:54:39.400262] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:33.877 passed 00:07:33.877 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 01:54:39.400304] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:33.877 passed 00:07:33.877 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:33.877 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 
01:54:39.400378] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:33.877 passed 00:07:33.877 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:33.877 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:33.877 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:33.877 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 01:54:39.400528] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:33.877 passed 00:07:33.877 Test: verify copy: DIF generated, GUARD check ...passed 00:07:33.877 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:33.877 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:33.877 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 01:54:39.400711] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:33.877 passed 00:07:33.877 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 01:54:39.400752] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:33.877 passed 00:07:33.877 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 01:54:39.400797] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:33.877 passed 00:07:33.877 Test: generate copy: DIF generated, GUARD check ...passed 00:07:33.877 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:33.877 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:33.877 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:33.877 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:33.877 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:33.877 Test: generate copy: iovecs-len validate ...[2024-07-14 01:54:39.401052] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:33.877 passed 00:07:33.877 Test: generate copy: buffer alignment validate ...passed 00:07:33.877 00:07:33.877 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.877 suites 1 1 n/a 0 0 00:07:33.877 tests 26 26 26 0 0 00:07:33.877 asserts 115 115 115 0 n/a 00:07:33.877 00:07:33.877 Elapsed time = 0.002 seconds 00:07:34.135 00:07:34.135 real 0m0.497s 00:07:34.135 user 0m0.775s 00:07:34.135 sys 0m0.179s 00:07:34.135 01:54:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.135 01:54:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:34.135 ************************************ 00:07:34.135 END TEST accel_dif_functional_tests 00:07:34.135 ************************************ 00:07:34.135 01:54:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.135 00:07:34.135 real 0m31.691s 00:07:34.135 user 0m35.096s 00:07:34.135 sys 0m4.571s 00:07:34.135 01:54:39 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.135 01:54:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.135 ************************************ 00:07:34.135 END TEST accel 00:07:34.135 ************************************ 00:07:34.135 01:54:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:34.135 01:54:39 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:34.135 01:54:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.135 01:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.135 01:54:39 -- common/autotest_common.sh@10 -- # set +x 00:07:34.135 ************************************ 00:07:34.135 START TEST accel_rpc 00:07:34.135 ************************************ 00:07:34.135 01:54:39 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:34.135 * Looking for test storage... 00:07:34.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:34.135 01:54:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:34.135 01:54:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1469434 00:07:34.135 01:54:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:34.135 01:54:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1469434 00:07:34.135 01:54:39 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1469434 ']' 00:07:34.135 01:54:39 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.135 01:54:39 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.135 01:54:39 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.135 01:54:39 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.135 01:54:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.135 [2024-07-14 01:54:39.780402] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:34.135 [2024-07-14 01:54:39.780482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469434 ] 00:07:34.135 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.393 [2024-07-14 01:54:39.840654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.393 [2024-07-14 01:54:39.925251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.393 01:54:39 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.393 01:54:39 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:34.393 01:54:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:34.393 01:54:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:34.393 01:54:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:34.393 01:54:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:34.393 01:54:39 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:34.393 01:54:39 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.393 01:54:39 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.393 01:54:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.393 ************************************ 00:07:34.393 START TEST accel_assign_opcode 00:07:34.393 ************************************ 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.393 [2024-07-14 01:54:40.013960] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.393 [2024-07-14 01:54:40.021968] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.393 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.651 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.651 01:54:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:34.651 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.651 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:07:34.651 01:54:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:34.651 01:54:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:34.651 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.651 software 00:07:34.651 00:07:34.651 real 0m0.290s 00:07:34.651 user 0m0.039s 00:07:34.651 sys 0m0.007s 00:07:34.651 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.651 01:54:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.651 ************************************ 00:07:34.651 END TEST accel_assign_opcode 00:07:34.651 ************************************ 00:07:34.651 01:54:40 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:34.651 01:54:40 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1469434 00:07:34.651 01:54:40 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1469434 ']' 00:07:34.651 01:54:40 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1469434 00:07:34.651 01:54:40 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:34.651 01:54:40 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.651 01:54:40 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469434 00:07:34.909 01:54:40 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.909 01:54:40 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.909 01:54:40 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469434' 00:07:34.909 killing process with pid 1469434 00:07:34.909 01:54:40 accel_rpc -- common/autotest_common.sh@967 -- # kill 1469434 00:07:34.909 01:54:40 accel_rpc -- common/autotest_common.sh@972 -- # wait 1469434 00:07:35.166 00:07:35.166 real 0m1.086s 00:07:35.166 user 0m1.020s 00:07:35.166 sys 0m0.427s 00:07:35.166 01:54:40 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.166 01:54:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.166 ************************************ 00:07:35.166 END TEST accel_rpc 00:07:35.166 ************************************ 00:07:35.166 01:54:40 -- common/autotest_common.sh@1142 -- # return 0 00:07:35.166 01:54:40 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:35.166 01:54:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.166 01:54:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.166 01:54:40 -- common/autotest_common.sh@10 -- # set +x 00:07:35.166 ************************************ 00:07:35.166 START TEST app_cmdline 00:07:35.166 ************************************ 00:07:35.167 01:54:40 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:35.424 * Looking for test storage... 
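For reference, the accel_rpc/accel_assign_opcode run that completes above drives opcode re-assignment entirely over JSON-RPC. A minimal by-hand sketch of the same sequence — assuming the workspace paths shown in this log and allowing the target a moment to open its RPC socket — would look roughly like:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --wait-for-rpc &                     # start the target paused, before subsystem init
  $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software     # pin the "copy" opcode to the software module
  $SPDK/scripts/rpc.py framework_start_init                     # finish initialization with the assignment in place
  $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # should print "software", which is what the test greps for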
00:07:35.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:35.424 01:54:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:35.424 01:54:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1469638 00:07:35.424 01:54:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:35.424 01:54:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1469638 00:07:35.424 01:54:40 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1469638 ']' 00:07:35.424 01:54:40 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.424 01:54:40 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.424 01:54:40 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.424 01:54:40 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.424 01:54:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.424 [2024-07-14 01:54:40.916815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:35.424 [2024-07-14 01:54:40.916935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469638 ] 00:07:35.424 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.424 [2024-07-14 01:54:40.976745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.424 [2024-07-14 01:54:41.061272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.682 01:54:41 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.682 01:54:41 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:35.682 01:54:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:35.939 { 00:07:35.939 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:35.939 "fields": { 00:07:35.939 "major": 24, 00:07:35.939 "minor": 9, 00:07:35.939 "patch": 0, 00:07:35.939 "suffix": "-pre", 00:07:35.939 "commit": "719d03c6a" 00:07:35.939 } 00:07:35.939 } 00:07:35.939 01:54:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:35.939 01:54:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:35.939 01:54:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:35.939 01:54:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:35.939 01:54:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:35.939 01:54:41 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.940 01:54:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:35.940 01:54:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.940 01:54:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:35.940 01:54:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:35.940 01:54:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:35.940 01:54:41 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.198 request: 00:07:36.198 { 00:07:36.198 "method": "env_dpdk_get_mem_stats", 00:07:36.198 "req_id": 1 00:07:36.198 } 00:07:36.198 Got JSON-RPC error response 00:07:36.198 response: 00:07:36.198 { 00:07:36.198 "code": -32601, 00:07:36.198 "message": "Method not found" 00:07:36.198 } 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.198 01:54:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1469638 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1469638 ']' 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1469638 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469638 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469638' 00:07:36.198 killing process with pid 1469638 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@967 -- # kill 1469638 00:07:36.198 01:54:41 app_cmdline -- common/autotest_common.sh@972 -- # wait 1469638 00:07:36.764 00:07:36.764 real 0m1.462s 00:07:36.764 user 0m1.784s 00:07:36.764 sys 0m0.457s 00:07:36.764 01:54:42 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
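For reference, the app_cmdline run above starts spdk_tgt with an RPC allow-list, so only the two whitelisted methods are callable and anything else fails with JSON-RPC error -32601, as seen in the env_dpdk_get_mem_stats response. A minimal sketch of the same check, assuming the paths shown in this log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  $SPDK/scripts/rpc.py spdk_get_version                       # allowed: returns the version object logged above
  $SPDK/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # allowed: lists exactly the permitted methods
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats                 # not on the allow-list: "Method not found" (-32601), as logged above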
00:07:36.764 01:54:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:36.764 ************************************ 00:07:36.764 END TEST app_cmdline 00:07:36.764 ************************************ 00:07:36.764 01:54:42 -- common/autotest_common.sh@1142 -- # return 0 00:07:36.764 01:54:42 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:36.764 01:54:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.764 01:54:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.764 01:54:42 -- common/autotest_common.sh@10 -- # set +x 00:07:36.764 ************************************ 00:07:36.764 START TEST version 00:07:36.764 ************************************ 00:07:36.764 01:54:42 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:36.764 * Looking for test storage... 00:07:36.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:36.764 01:54:42 version -- app/version.sh@17 -- # get_header_version major 00:07:36.764 01:54:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:36.764 01:54:42 version -- app/version.sh@14 -- # cut -f2 00:07:36.764 01:54:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.764 01:54:42 version -- app/version.sh@17 -- # major=24 00:07:36.764 01:54:42 version -- app/version.sh@18 -- # get_header_version minor 00:07:36.764 01:54:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:36.764 01:54:42 version -- app/version.sh@14 -- # cut -f2 00:07:36.764 01:54:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.764 01:54:42 version -- app/version.sh@18 -- # minor=9 00:07:36.764 01:54:42 version -- app/version.sh@19 -- # get_header_version patch 00:07:36.765 01:54:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:36.765 01:54:42 version -- app/version.sh@14 -- # cut -f2 00:07:36.765 01:54:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.765 01:54:42 version -- app/version.sh@19 -- # patch=0 00:07:36.765 01:54:42 version -- app/version.sh@20 -- # get_header_version suffix 00:07:36.765 01:54:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:36.765 01:54:42 version -- app/version.sh@14 -- # cut -f2 00:07:36.765 01:54:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.765 01:54:42 version -- app/version.sh@20 -- # suffix=-pre 00:07:36.765 01:54:42 version -- app/version.sh@22 -- # version=24.9 00:07:36.765 01:54:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:36.765 01:54:42 version -- app/version.sh@28 -- # version=24.9rc0 00:07:36.765 01:54:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:36.765 01:54:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:36.765 01:54:42 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:36.765 01:54:42 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:36.765 00:07:36.765 real 0m0.109s 00:07:36.765 user 0m0.056s 00:07:36.765 sys 0m0.075s 00:07:36.765 01:54:42 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.765 01:54:42 version -- common/autotest_common.sh@10 -- # set +x 00:07:36.765 ************************************ 00:07:36.765 END TEST version 00:07:36.765 ************************************ 00:07:36.765 01:54:42 -- common/autotest_common.sh@1142 -- # return 0 00:07:36.765 01:54:42 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:36.765 01:54:42 -- spdk/autotest.sh@198 -- # uname -s 00:07:37.024 01:54:42 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:37.024 01:54:42 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:37.024 01:54:42 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:37.024 01:54:42 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:37.024 01:54:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:37.024 01:54:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:37.024 01:54:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:37.024 01:54:42 -- common/autotest_common.sh@10 -- # set +x 00:07:37.024 01:54:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:37.024 01:54:42 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:37.024 01:54:42 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:37.024 01:54:42 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:37.024 01:54:42 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:37.024 01:54:42 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:37.024 01:54:42 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:37.024 01:54:42 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.024 01:54:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.024 01:54:42 -- common/autotest_common.sh@10 -- # set +x 00:07:37.024 ************************************ 00:07:37.024 START TEST nvmf_tcp 00:07:37.024 ************************************ 00:07:37.024 01:54:42 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:37.024 * Looking for test storage... 00:07:37.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.024 01:54:42 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.024 01:54:42 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.024 01:54:42 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.024 01:54:42 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.024 01:54:42 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.024 01:54:42 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.024 01:54:42 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:37.024 01:54:42 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:37.024 01:54:42 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.024 01:54:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:37.024 01:54:42 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:37.024 01:54:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.024 01:54:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.024 01:54:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.024 ************************************ 00:07:37.024 START TEST nvmf_example 00:07:37.024 ************************************ 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:37.024 * Looking for test storage... 
00:07:37.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.024 01:54:42 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.025 01:54:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:38.926 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.926 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:38.927 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:38.927 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:38.927 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.927 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:07:39.185 00:07:39.185 --- 10.0.0.2 ping statistics --- 00:07:39.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.185 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:07:39.185 00:07:39.185 --- 10.0.0.1 ping statistics --- 00:07:39.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.185 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1471657 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.185 01:54:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1471657 00:07:39.186 01:54:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1471657 ']' 00:07:39.186 01:54:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.186 01:54:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.186 01:54:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
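The nvmftestinit phase traced above builds the test network for NVMe/TCP: one port of the dual-port NIC is moved into a private network namespace to act as the target side, the other port stays in the root namespace as the initiator, both ends get addresses on 10.0.0.0/24, the NVMe/TCP port is opened, and connectivity is verified with a ping in each direction. A minimal sketch of that bring-up, using the interface names, namespace name, and addresses from this particular run (other rigs will differ):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow TCP port 4420 (NVMe/TCP) through the host firewall
  ping -c 1 10.0.0.2                                                  # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator address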
00:07:39.186 01:54:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.186 01:54:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.186 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.119 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.119 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:40.119 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:40.119 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.119 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.119 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:40.119 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.119 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:40.377 01:54:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:40.377 EAL: No free 2048 kB hugepages reported on node 1 
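At this point the test has started the nvmf example application inside the target namespace and configured it through rpc_cmd before generating load with spdk_nvme_perf. A sketch of the equivalent manual sequence, written here as direct scripts/rpc.py calls (an assumed but conventional way to issue the same RPCs; every argument below is the one visible in the trace for this run):

  # Start the example NVMe-oF target (the test runs this under 'ip netns exec cvl_0_0_ns_spdk').
  ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport (options copied from the trace)
  ./scripts/rpc.py bdev_malloc_create 64 512                    # 64 MB malloc bdev, 512-byte blocks -> "Malloc0"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Drive 10 s of 4 KiB mixed random read/write I/O (randrw, -M 30) at queue depth 64 from the initiator side:
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'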
00:07:50.349 Initializing NVMe Controllers 00:07:50.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:50.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:50.349 Initialization complete. Launching workers. 00:07:50.349 ======================================================== 00:07:50.349 Latency(us) 00:07:50.349 Device Information : IOPS MiB/s Average min max 00:07:50.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14789.84 57.77 4326.99 866.37 16125.26 00:07:50.349 ======================================================== 00:07:50.349 Total : 14789.84 57.77 4326.99 866.37 16125.26 00:07:50.349 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:50.607 rmmod nvme_tcp 00:07:50.607 rmmod nvme_fabrics 00:07:50.607 rmmod nvme_keyring 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1471657 ']' 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1471657 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1471657 ']' 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1471657 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1471657 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:50.607 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:50.608 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1471657' 00:07:50.608 killing process with pid 1471657 00:07:50.608 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1471657 00:07:50.608 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1471657 00:07:50.867 nvmf threads initialize successfully 00:07:50.867 bdev subsystem init successfully 00:07:50.867 created a nvmf target service 00:07:50.867 create targets's poll groups done 00:07:50.867 all subsystems of target started 00:07:50.867 nvmf target is running 00:07:50.867 all subsystems of target stopped 00:07:50.867 destroy targets's poll groups done 00:07:50.867 destroyed the nvmf target service 00:07:50.867 bdev subsystem finish successfully 00:07:50.867 nvmf threads destroy successfully 00:07:50.867 01:54:56 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.867 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.867 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.867 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.867 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.867 01:54:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.867 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.867 01:54:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.767 01:54:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:52.767 01:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:52.767 01:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.767 01:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:52.767 00:07:52.767 real 0m15.865s 00:07:52.767 user 0m45.244s 00:07:52.767 sys 0m3.175s 00:07:52.767 01:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.767 01:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:52.767 ************************************ 00:07:52.767 END TEST nvmf_example 00:07:52.767 ************************************ 00:07:53.030 01:54:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:53.030 01:54:58 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:53.030 01:54:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.030 01:54:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.030 01:54:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.030 ************************************ 00:07:53.030 START TEST nvmf_filesystem 00:07:53.030 ************************************ 00:07:53.030 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:53.030 * Looking for test storage... 
00:07:53.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.030 01:54:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:53.030 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:53.031 01:54:58 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:53.031 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:53.031 #define SPDK_CONFIG_H 00:07:53.031 #define SPDK_CONFIG_APPS 1 00:07:53.031 #define SPDK_CONFIG_ARCH native 00:07:53.031 #undef SPDK_CONFIG_ASAN 00:07:53.031 #undef SPDK_CONFIG_AVAHI 00:07:53.031 #undef SPDK_CONFIG_CET 00:07:53.031 #define SPDK_CONFIG_COVERAGE 1 00:07:53.031 #define SPDK_CONFIG_CROSS_PREFIX 00:07:53.031 #undef SPDK_CONFIG_CRYPTO 00:07:53.031 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:53.031 #undef SPDK_CONFIG_CUSTOMOCF 00:07:53.031 #undef SPDK_CONFIG_DAOS 00:07:53.032 #define SPDK_CONFIG_DAOS_DIR 00:07:53.032 #define SPDK_CONFIG_DEBUG 1 00:07:53.032 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:53.032 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:53.032 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:53.032 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:53.032 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:53.032 #undef SPDK_CONFIG_DPDK_UADK 00:07:53.032 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:53.032 #define SPDK_CONFIG_EXAMPLES 1 00:07:53.032 #undef SPDK_CONFIG_FC 00:07:53.032 #define SPDK_CONFIG_FC_PATH 00:07:53.032 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:53.032 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:53.032 #undef SPDK_CONFIG_FUSE 00:07:53.032 #undef SPDK_CONFIG_FUZZER 00:07:53.032 #define SPDK_CONFIG_FUZZER_LIB 00:07:53.032 #undef SPDK_CONFIG_GOLANG 00:07:53.032 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:53.032 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:53.032 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:53.032 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:53.032 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:53.032 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:53.032 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:53.032 #define SPDK_CONFIG_IDXD 1 00:07:53.032 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:53.032 #undef SPDK_CONFIG_IPSEC_MB 00:07:53.032 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:53.032 #define SPDK_CONFIG_ISAL 1 00:07:53.032 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:53.032 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:53.032 #define 
SPDK_CONFIG_LIBDIR 00:07:53.032 #undef SPDK_CONFIG_LTO 00:07:53.032 #define SPDK_CONFIG_MAX_LCORES 128 00:07:53.032 #define SPDK_CONFIG_NVME_CUSE 1 00:07:53.032 #undef SPDK_CONFIG_OCF 00:07:53.032 #define SPDK_CONFIG_OCF_PATH 00:07:53.032 #define SPDK_CONFIG_OPENSSL_PATH 00:07:53.032 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:53.032 #define SPDK_CONFIG_PGO_DIR 00:07:53.032 #undef SPDK_CONFIG_PGO_USE 00:07:53.032 #define SPDK_CONFIG_PREFIX /usr/local 00:07:53.032 #undef SPDK_CONFIG_RAID5F 00:07:53.032 #undef SPDK_CONFIG_RBD 00:07:53.032 #define SPDK_CONFIG_RDMA 1 00:07:53.032 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:53.032 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:53.032 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:53.032 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:53.032 #define SPDK_CONFIG_SHARED 1 00:07:53.032 #undef SPDK_CONFIG_SMA 00:07:53.032 #define SPDK_CONFIG_TESTS 1 00:07:53.032 #undef SPDK_CONFIG_TSAN 00:07:53.032 #define SPDK_CONFIG_UBLK 1 00:07:53.032 #define SPDK_CONFIG_UBSAN 1 00:07:53.032 #undef SPDK_CONFIG_UNIT_TESTS 00:07:53.032 #undef SPDK_CONFIG_URING 00:07:53.032 #define SPDK_CONFIG_URING_PATH 00:07:53.032 #undef SPDK_CONFIG_URING_ZNS 00:07:53.032 #undef SPDK_CONFIG_USDT 00:07:53.032 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:53.032 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:53.032 #define SPDK_CONFIG_VFIO_USER 1 00:07:53.032 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:53.032 #define SPDK_CONFIG_VHOST 1 00:07:53.032 #define SPDK_CONFIG_VIRTIO 1 00:07:53.032 #undef SPDK_CONFIG_VTUNE 00:07:53.032 #define SPDK_CONFIG_VTUNE_DIR 00:07:53.032 #define SPDK_CONFIG_WERROR 1 00:07:53.032 #define SPDK_CONFIG_WPDK_DIR 00:07:53.032 #undef SPDK_CONFIG_XNVME 00:07:53.032 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:53.032 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:53.033 
01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:53.033 
01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:53.033 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
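The exports above pin down the runtime environment for this test: library search paths for the in-tree SPDK build and the external DPDK v23.11 build, the Python RPC client path, sanitizer options, and the default RPC socket. The repeated components in LD_LIBRARY_PATH and PYTHONPATH are most likely the result of the common script being sourced once per nested test. Condensed into a minimal sketch (paths copied from this run's workspace, so a reproduction aid rather than a general recipe):

  # Minimal sketch of the environment the harness builds up (values from this run)
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  export SPDK_BIN_DIR=$ws/spdk/build/bin
  export SPDK_LIB_DIR=$ws/spdk/build/lib
  export DPDK_LIB_DIR=$ws/dpdk/build/lib
  export VFIO_LIB_DIR=$ws/spdk/build/libvfio-user/usr/local/lib
  export LD_LIBRARY_PATH=$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR
  export PYTHONPATH=$ws/spdk/python:$ws/spdk/test/rpc_plugins
  export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134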
00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1473360 ]] 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1473360 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.zgjQl6 00:07:53.034 
01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zgjQl6/tests/target /tmp/spdk.zgjQl6 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=52911476736 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9083232256 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:07:53.034 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996197376 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1159168 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:53.035 * Looking for test storage... 
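The storage search that follows is a headroom check: set_test_storage reads the mount backing each candidate directory from df and accepts the first one whose projected usage (current use plus the ~2.1 GiB requested here) does not exceed 95% of the filesystem size. With the numbers reported below for the overlay root — 9,083,232,256 bytes used, 2,214,592,512 bytes requested, 61,994,708,992 bytes total — the projection is 11,297,824,768 bytes, roughly 18% of the filesystem, so / is accepted and SPDK_TEST_STORAGE is pointed at the test directory. A hypothetical standalone version of the same check:

  # Hypothetical re-implementation of the headroom check (numbers from this run)
  requested_size=2214592512      # requested_size as reported in the trace above
  used=9083232256                # 'uses' value for the spdk_root overlay mounted on /
  size=61994708992               # filesystem size for /
  new_size=$(( used + requested_size ))          # 11297824768, matching new_size below
  if (( new_size * 100 / size > 95 )); then
      echo 'not enough space on /' >&2
  else
      echo "accept / as test storage (projected $(( new_size * 100 / size ))% full)"
  fi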
00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=52911476736 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=11297824768 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.035 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
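Of the constants sourced from nvmf/common.sh above, the ones this test reuses later are the first listener port (4420), the subsystem serial (SPDKISFASTANDAWESOME), and the host identity generated once per run with nvme gen-hostnqn; the host ID logged here is simply the UUID portion of that NQN, and both reappear in the nvme connect call further down. Roughly (the hostid derivation is an assumption about the helper, not something visible in this trace):

  # Host identity as generated in this run; the UUID is machine-specific
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: hostid taken as the UUID suffix of the NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVMF_PORT=4420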
00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.036 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.037 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:53.037 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:53.037 01:54:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:53.037 01:54:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
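The array setup above is a vendor:device lookup table: Intel (0x8086) E810 parts 0x1592/0x159b and X722 0x37d2, plus a range of Mellanox (0x15b3) device IDs. Because SPDK_TEST_NVMF_NICS is e810 for this job, only the e810 list seeds pci_devs, and the scan that follows finds two 0x159b functions at 0000:0a:00.0 and 0000:0a:00.1 bound to the ice driver, exposing the net devices cvl_0_0 and cvl_0_1. A hypothetical manual equivalent of that discovery (not part of the harness):

  # List the Intel E810 0x159b functions and the netdev behind the first one
  lspci -D -d 8086:159b                        # this run: 0000:0a:00.0 and 0000:0a:00.1
  ls /sys/bus/pci/devices/0000:0a:00.0/net     # -> cvl_0_0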
00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:54.987 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:54.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.987 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:54.988 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:54.988 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.988 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:55.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:55.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:55.246 00:07:55.246 --- 10.0.0.2 ping statistics --- 00:07:55.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.246 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:55.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:07:55.246 00:07:55.246 --- 10.0.0.1 ping statistics --- 00:07:55.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.246 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.246 ************************************ 00:07:55.246 START TEST nvmf_filesystem_no_in_capsule 00:07:55.246 ************************************ 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1474986 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1474986 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
1474986 ']' 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.246 01:55:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.246 [2024-07-14 01:55:00.911909] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:55.246 [2024-07-14 01:55:00.911996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.505 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.505 [2024-07-14 01:55:00.985963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.505 [2024-07-14 01:55:01.085008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.505 [2024-07-14 01:55:01.085079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.505 [2024-07-14 01:55:01.085097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.505 [2024-07-14 01:55:01.085110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.505 [2024-07-14 01:55:01.085121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
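At this point the target network namespace cvl_0_0_ns_spdk has already been set up (cvl_0_0 moved into it with 10.0.0.2/24, cvl_0_1 left in the default namespace with 10.0.0.1/24, both directions verified by ping), and nvmf_tgt has been launched inside that namespace with -m 0xF and -e 0xFFFF; the four reactor notices that follow correspond to the 0xF core mask. The rpc_cmd calls further down then build the fixture for the no-in-capsule variant: a TCP transport with in-capsule data size 0, a 512 MiB malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and a TCP listener on 10.0.0.2:4420, to which the initiator connects. Condensed, as roughly equivalent direct invocations (rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py):

  # Target-side configuration (mirrors the rpc_cmd trace below)
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 512 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side, run from the default namespace
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID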
00:07:55.505 [2024-07-14 01:55:01.086889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.505 [2024-07-14 01:55:01.086955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.505 [2024-07-14 01:55:01.086957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.505 [2024-07-14 01:55:01.086920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.763 [2024-07-14 01:55:01.244738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.763 Malloc1 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.763 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.764 [2024-07-14 01:55:01.430196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:55.764 { 00:07:55.764 "name": "Malloc1", 00:07:55.764 "aliases": [ 00:07:55.764 "3d2f5673-8157-4e90-a615-81c8367bdc65" 00:07:55.764 ], 00:07:55.764 "product_name": "Malloc disk", 00:07:55.764 "block_size": 512, 00:07:55.764 "num_blocks": 1048576, 00:07:55.764 "uuid": "3d2f5673-8157-4e90-a615-81c8367bdc65", 00:07:55.764 "assigned_rate_limits": { 00:07:55.764 "rw_ios_per_sec": 0, 00:07:55.764 "rw_mbytes_per_sec": 0, 00:07:55.764 "r_mbytes_per_sec": 0, 00:07:55.764 "w_mbytes_per_sec": 0 00:07:55.764 }, 00:07:55.764 "claimed": true, 00:07:55.764 "claim_type": "exclusive_write", 00:07:55.764 "zoned": false, 00:07:55.764 "supported_io_types": { 00:07:55.764 "read": true, 00:07:55.764 "write": true, 00:07:55.764 "unmap": true, 00:07:55.764 "flush": true, 00:07:55.764 "reset": true, 00:07:55.764 "nvme_admin": false, 00:07:55.764 "nvme_io": false, 00:07:55.764 "nvme_io_md": false, 00:07:55.764 "write_zeroes": true, 00:07:55.764 "zcopy": true, 00:07:55.764 "get_zone_info": false, 00:07:55.764 "zone_management": false, 00:07:55.764 "zone_append": false, 00:07:55.764 "compare": false, 00:07:55.764 "compare_and_write": false, 00:07:55.764 "abort": true, 00:07:55.764 "seek_hole": false, 00:07:55.764 "seek_data": false, 00:07:55.764 "copy": true, 00:07:55.764 "nvme_iov_md": false 00:07:55.764 }, 00:07:55.764 "memory_domains": [ 00:07:55.764 { 
00:07:55.764 "dma_device_id": "system", 00:07:55.764 "dma_device_type": 1 00:07:55.764 }, 00:07:55.764 { 00:07:55.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.764 "dma_device_type": 2 00:07:55.764 } 00:07:55.764 ], 00:07:55.764 "driver_specific": {} 00:07:55.764 } 00:07:55.764 ]' 00:07:55.764 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:56.022 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:56.022 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:56.022 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:56.022 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:56.022 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:56.022 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:56.022 01:55:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:56.587 01:55:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.587 01:55:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:56.587 01:55:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.587 01:55:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:56.588 01:55:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:59.115 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:59.373 01:55:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:00.307 01:55:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:00.307 01:55:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:00.307 01:55:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:00.307 01:55:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.307 01:55:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.565 ************************************ 00:08:00.565 START TEST filesystem_ext4 00:08:00.565 ************************************ 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:00.565 01:55:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:00.565 01:55:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:00.565 mke2fs 1.46.5 (30-Dec-2021) 00:08:00.565 Discarding device blocks: 0/522240 done 00:08:00.565 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:00.565 Filesystem UUID: 7972303d-78c2-477d-93b1-8db738b37e8f 00:08:00.565 Superblock backups stored on blocks: 00:08:00.565 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:00.565 00:08:00.565 Allocating group tables: 0/64 done 00:08:00.565 Writing inode tables: 0/64 done 00:08:03.842 Creating journal (8192 blocks): done 00:08:04.408 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:08:04.408 00:08:04.408 01:55:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:04.408 01:55:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.408 01:55:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1474986 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.408 00:08:04.408 real 0m4.080s 00:08:04.408 user 0m0.008s 00:08:04.408 sys 0m0.069s 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.408 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:04.408 ************************************ 00:08:04.408 END TEST filesystem_ext4 00:08:04.408 ************************************ 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:04.666 01:55:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.666 ************************************ 00:08:04.666 START TEST filesystem_btrfs 00:08:04.666 ************************************ 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:04.666 btrfs-progs v6.6.2 00:08:04.666 See https://btrfs.readthedocs.io for more information. 00:08:04.666 00:08:04.666 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:04.666 NOTE: several default settings have changed in version 5.15, please make sure 00:08:04.666 this does not affect your deployments: 00:08:04.666 - DUP for metadata (-m dup) 00:08:04.666 - enabled no-holes (-O no-holes) 00:08:04.666 - enabled free-space-tree (-R free-space-tree) 00:08:04.666 00:08:04.666 Label: (null) 00:08:04.666 UUID: a45a1f16-9517-4ab2-af31-98cf2d07cbfc 00:08:04.666 Node size: 16384 00:08:04.666 Sector size: 4096 00:08:04.666 Filesystem size: 510.00MiB 00:08:04.666 Block group profiles: 00:08:04.666 Data: single 8.00MiB 00:08:04.666 Metadata: DUP 32.00MiB 00:08:04.666 System: DUP 8.00MiB 00:08:04.666 SSD detected: yes 00:08:04.666 Zoned device: no 00:08:04.666 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:04.666 Runtime features: free-space-tree 00:08:04.666 Checksum: crc32c 00:08:04.666 Number of devices: 1 00:08:04.666 Devices: 00:08:04.666 ID SIZE PATH 00:08:04.666 1 510.00MiB /dev/nvme0n1p1 00:08:04.666 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:04.666 01:55:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1474986 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.040 00:08:06.040 real 0m1.260s 00:08:06.040 user 0m0.023s 00:08:06.040 sys 0m0.111s 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:06.040 ************************************ 00:08:06.040 END TEST filesystem_btrfs 00:08:06.040 ************************************ 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.040 ************************************ 00:08:06.040 START TEST filesystem_xfs 00:08:06.040 ************************************ 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:06.040 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:06.041 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:06.041 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:06.041 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:06.041 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:06.041 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:06.041 01:55:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:06.041 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:06.041 = sectsz=512 attr=2, projid32bit=1 00:08:06.041 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:06.041 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:06.041 data = bsize=4096 blocks=130560, imaxpct=25 00:08:06.041 = sunit=0 swidth=0 blks 00:08:06.041 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:06.041 log =internal log bsize=4096 blocks=16384, version=2 00:08:06.041 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:06.041 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:06.973 Discarding blocks...Done. 
00:08:06.973 01:55:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:06.973 01:55:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.497 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.497 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:09.497 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.497 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:09.497 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:09.497 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.498 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1474986 00:08:09.498 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.498 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.498 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.498 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.498 00:08:09.498 real 0m3.712s 00:08:09.498 user 0m0.015s 00:08:09.498 sys 0m0.068s 00:08:09.498 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.498 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.498 ************************************ 00:08:09.498 END TEST filesystem_xfs 00:08:09.498 ************************************ 00:08:09.498 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:09.498 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.757 01:55:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1474986 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1474986 ']' 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1474986 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1474986 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1474986' 00:08:09.757 killing process with pid 1474986 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1474986 00:08:09.757 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1474986 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:10.324 00:08:10.324 real 0m14.923s 00:08:10.324 user 0m57.470s 00:08:10.324 sys 0m2.035s 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.324 ************************************ 00:08:10.324 END TEST nvmf_filesystem_no_in_capsule 00:08:10.324 ************************************ 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.324 ************************************ 00:08:10.324 START TEST nvmf_filesystem_in_capsule 00:08:10.324 ************************************ 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1476954 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1476954 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1476954 ']' 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.324 01:55:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.324 [2024-07-14 01:55:15.887723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:10.324 [2024-07-14 01:55:15.887807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.324 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.324 [2024-07-14 01:55:15.957222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.582 [2024-07-14 01:55:16.047738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.582 [2024-07-14 01:55:16.047798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:10.582 [2024-07-14 01:55:16.047816] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.582 [2024-07-14 01:55:16.047830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.582 [2024-07-14 01:55:16.047842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.582 [2024-07-14 01:55:16.047918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.582 [2024-07-14 01:55:16.047978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.582 [2024-07-14 01:55:16.048097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.582 [2024-07-14 01:55:16.048100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.582 [2024-07-14 01:55:16.205778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.582 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.891 Malloc1 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.891 01:55:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.891 [2024-07-14 01:55:16.390115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:10.891 { 00:08:10.891 "name": "Malloc1", 00:08:10.891 "aliases": [ 00:08:10.891 "31a2daac-5882-4c21-b230-708ca40fcdfa" 00:08:10.891 ], 00:08:10.891 "product_name": "Malloc disk", 00:08:10.891 "block_size": 512, 00:08:10.891 "num_blocks": 1048576, 00:08:10.891 "uuid": "31a2daac-5882-4c21-b230-708ca40fcdfa", 00:08:10.891 "assigned_rate_limits": { 00:08:10.891 "rw_ios_per_sec": 0, 00:08:10.891 "rw_mbytes_per_sec": 0, 00:08:10.891 "r_mbytes_per_sec": 0, 00:08:10.891 "w_mbytes_per_sec": 0 00:08:10.891 }, 00:08:10.891 "claimed": true, 00:08:10.891 "claim_type": "exclusive_write", 00:08:10.891 "zoned": false, 00:08:10.891 "supported_io_types": { 00:08:10.891 "read": true, 00:08:10.891 "write": true, 00:08:10.891 "unmap": true, 00:08:10.891 "flush": true, 00:08:10.891 "reset": true, 00:08:10.891 "nvme_admin": false, 00:08:10.891 "nvme_io": false, 00:08:10.891 "nvme_io_md": false, 00:08:10.891 "write_zeroes": true, 00:08:10.891 "zcopy": true, 00:08:10.891 "get_zone_info": false, 00:08:10.891 "zone_management": false, 00:08:10.891 
"zone_append": false, 00:08:10.891 "compare": false, 00:08:10.891 "compare_and_write": false, 00:08:10.891 "abort": true, 00:08:10.891 "seek_hole": false, 00:08:10.891 "seek_data": false, 00:08:10.891 "copy": true, 00:08:10.891 "nvme_iov_md": false 00:08:10.891 }, 00:08:10.891 "memory_domains": [ 00:08:10.891 { 00:08:10.891 "dma_device_id": "system", 00:08:10.891 "dma_device_type": 1 00:08:10.891 }, 00:08:10.891 { 00:08:10.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.891 "dma_device_type": 2 00:08:10.891 } 00:08:10.891 ], 00:08:10.891 "driver_specific": {} 00:08:10.891 } 00:08:10.891 ]' 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:10.891 01:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:11.455 01:55:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.455 01:55:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:11.455 01:55:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.455 01:55:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:11.455 01:55:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:13.976 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:14.232 01:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.625 ************************************ 00:08:15.625 START TEST filesystem_in_capsule_ext4 00:08:15.625 ************************************ 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:15.625 01:55:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:15.625 01:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:15.625 mke2fs 1.46.5 (30-Dec-2021) 00:08:15.625 Discarding device blocks: 0/522240 done 00:08:15.625 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:15.625 Filesystem UUID: ff711da8-01d6-4a3a-8287-a5a49637704e 00:08:15.625 Superblock backups stored on blocks: 00:08:15.625 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:15.625 00:08:15.625 Allocating group tables: 0/64 done 00:08:15.625 Writing inode tables: 0/64 done 00:08:15.625 Creating journal (8192 blocks): done 00:08:15.625 Writing superblocks and filesystem accounting information: 0/64 done 00:08:15.625 00:08:15.625 01:55:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:15.625 01:55:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1476954 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.556 00:08:16.556 real 0m1.184s 00:08:16.556 user 0m0.024s 00:08:16.556 sys 0m0.048s 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:16.556 ************************************ 00:08:16.556 END TEST filesystem_in_capsule_ext4 00:08:16.556 ************************************ 00:08:16.556 
01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.556 ************************************ 00:08:16.556 START TEST filesystem_in_capsule_btrfs 00:08:16.556 ************************************ 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:16.556 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:16.557 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:16.815 btrfs-progs v6.6.2 00:08:16.815 See https://btrfs.readthedocs.io for more information. 00:08:16.815 00:08:16.815 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:16.815 NOTE: several default settings have changed in version 5.15, please make sure 00:08:16.815 this does not affect your deployments: 00:08:16.815 - DUP for metadata (-m dup) 00:08:16.815 - enabled no-holes (-O no-holes) 00:08:16.815 - enabled free-space-tree (-R free-space-tree) 00:08:16.815 00:08:16.815 Label: (null) 00:08:16.815 UUID: a66f1069-ba9f-4eed-a548-f5089227725e 00:08:16.815 Node size: 16384 00:08:16.815 Sector size: 4096 00:08:16.815 Filesystem size: 510.00MiB 00:08:16.815 Block group profiles: 00:08:16.815 Data: single 8.00MiB 00:08:16.815 Metadata: DUP 32.00MiB 00:08:16.815 System: DUP 8.00MiB 00:08:16.815 SSD detected: yes 00:08:16.815 Zoned device: no 00:08:16.815 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:16.815 Runtime features: free-space-tree 00:08:16.815 Checksum: crc32c 00:08:16.815 Number of devices: 1 00:08:16.815 Devices: 00:08:16.815 ID SIZE PATH 00:08:16.815 1 510.00MiB /dev/nvme0n1p1 00:08:16.815 00:08:16.815 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:16.815 01:55:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1476954 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.746 00:08:17.746 real 0m1.086s 00:08:17.746 user 0m0.030s 00:08:17.746 sys 0m0.101s 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:17.746 ************************************ 00:08:17.746 END TEST filesystem_in_capsule_btrfs 00:08:17.746 ************************************ 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.746 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.746 ************************************ 00:08:17.746 START TEST filesystem_in_capsule_xfs 00:08:17.746 ************************************ 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:17.747 01:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:17.747 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:17.747 = sectsz=512 attr=2, projid32bit=1 00:08:17.747 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:17.747 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:17.747 data = bsize=4096 blocks=130560, imaxpct=25 00:08:17.747 = sunit=0 swidth=0 blks 00:08:17.747 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:17.747 log =internal log bsize=4096 blocks=16384, version=2 00:08:17.747 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:17.747 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:19.117 Discarding blocks...Done. 
00:08:19.117 01:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:19.117 01:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1476954 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.012 00:08:21.012 real 0m3.001s 00:08:21.012 user 0m0.012s 00:08:21.012 sys 0m0.064s 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:21.012 ************************************ 00:08:21.012 END TEST filesystem_in_capsule_xfs 00:08:21.012 ************************************ 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.012 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:21.013 01:55:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1476954 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1476954 ']' 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1476954 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1476954 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1476954' 00:08:21.013 killing process with pid 1476954 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1476954 00:08:21.013 01:55:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1476954 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:21.579 00:08:21.579 real 0m11.213s 00:08:21.579 user 0m43.032s 00:08:21.579 sys 0m1.688s 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.579 ************************************ 00:08:21.579 END TEST nvmf_filesystem_in_capsule 00:08:21.579 ************************************ 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.579 rmmod nvme_tcp 00:08:21.579 rmmod nvme_fabrics 00:08:21.579 rmmod nvme_keyring 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.579 01:55:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.483 01:55:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.483 00:08:23.483 real 0m30.663s 00:08:23.483 user 1m41.416s 00:08:23.483 sys 0m5.332s 00:08:23.483 01:55:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.483 01:55:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.483 ************************************ 00:08:23.483 END TEST nvmf_filesystem 00:08:23.483 ************************************ 00:08:23.741 01:55:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:23.741 01:55:29 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:23.741 01:55:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:23.741 01:55:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.741 01:55:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.741 ************************************ 00:08:23.741 START TEST nvmf_target_discovery 00:08:23.741 ************************************ 00:08:23.741 01:55:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:23.741 * Looking for test storage... 
00:08:23.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.742 01:55:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.644 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.645 01:55:31 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.645 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.645 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.645 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.645 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.645 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.903 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.903 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.903 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.903 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.903 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:08:25.904 00:08:25.904 --- 10.0.0.2 ping statistics --- 00:08:25.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.904 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:08:25.904 00:08:25.904 --- 10.0.0.1 ping statistics --- 00:08:25.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.904 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1480357 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1480357 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1480357 ']' 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:25.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.904 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.904 [2024-07-14 01:55:31.560085] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:25.904 [2024-07-14 01:55:31.560183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.162 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.162 [2024-07-14 01:55:31.633978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.162 [2024-07-14 01:55:31.726849] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.162 [2024-07-14 01:55:31.726948] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.162 [2024-07-14 01:55:31.726963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.162 [2024-07-14 01:55:31.726985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.162 [2024-07-14 01:55:31.726995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.162 [2024-07-14 01:55:31.727067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.162 [2024-07-14 01:55:31.727096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.162 [2024-07-14 01:55:31.728889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.162 [2024-07-14 01:55:31.728901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.162 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.162 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:26.162 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.162 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.162 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.420 [2024-07-14 01:55:31.876576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
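From here the trace loops over four identical null-bdev subsystems and then exposes the discovery service plus a referral. Condensed into a standalone sketch based on the rpc_cmd calls visible in this stretch of the log (the scripts/rpc.py invocation, explicit loop, and serial-number padding are illustrative; the trace issues the same RPCs through rpc_cmd):

    for i in 1 2 3 4; do
        # null bdev named Null<i>, sized 102400 with block size 512, as passed in the trace
        scripts/rpc.py bdev_null_create "Null$i" 102400 512
        # one subsystem per bdev, any host allowed (-a), serial number as seen in the log
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "$(printf 'SPDK%014d' "$i")"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    # expose the discovery service itself and point a referral at port 4430
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

The nvme discover output further down shows the expected result: one current discovery entry, four NVMe subsystem entries on port 4420, and one referral entry on port 4430.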
00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.420 Null1 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.420 [2024-07-14 01:55:31.916892] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.420 Null2 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.420 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:26.421 01:55:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 Null3 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 Null4 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:26.421 01:55:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.421 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:26.680 00:08:26.680 Discovery Log Number of Records 6, Generation counter 6 00:08:26.680 =====Discovery Log Entry 0====== 00:08:26.680 trtype: tcp 00:08:26.680 adrfam: ipv4 00:08:26.680 subtype: current discovery subsystem 00:08:26.680 treq: not required 00:08:26.680 portid: 0 00:08:26.680 trsvcid: 4420 00:08:26.680 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:26.680 traddr: 10.0.0.2 00:08:26.680 eflags: explicit discovery connections, duplicate discovery information 00:08:26.680 sectype: none 00:08:26.680 =====Discovery Log Entry 1====== 00:08:26.680 trtype: tcp 00:08:26.680 adrfam: ipv4 00:08:26.680 subtype: nvme subsystem 00:08:26.680 treq: not required 00:08:26.680 portid: 0 00:08:26.680 trsvcid: 4420 00:08:26.680 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:26.680 traddr: 10.0.0.2 00:08:26.680 eflags: none 00:08:26.680 sectype: none 00:08:26.680 =====Discovery Log Entry 2====== 00:08:26.680 trtype: tcp 00:08:26.680 adrfam: ipv4 00:08:26.680 subtype: nvme subsystem 00:08:26.680 treq: not required 00:08:26.680 portid: 0 00:08:26.680 trsvcid: 4420 00:08:26.680 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:26.680 traddr: 10.0.0.2 00:08:26.680 eflags: none 00:08:26.680 sectype: none 00:08:26.680 =====Discovery Log Entry 3====== 00:08:26.680 trtype: tcp 00:08:26.680 adrfam: ipv4 00:08:26.680 subtype: nvme subsystem 00:08:26.680 treq: not required 00:08:26.680 portid: 0 00:08:26.680 trsvcid: 4420 00:08:26.680 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:26.680 traddr: 10.0.0.2 00:08:26.680 eflags: none 00:08:26.680 sectype: none 00:08:26.680 =====Discovery Log Entry 4====== 00:08:26.680 trtype: tcp 00:08:26.680 adrfam: ipv4 00:08:26.680 subtype: nvme subsystem 00:08:26.680 treq: not required 
00:08:26.680 portid: 0 00:08:26.680 trsvcid: 4420 00:08:26.680 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:26.680 traddr: 10.0.0.2 00:08:26.680 eflags: none 00:08:26.680 sectype: none 00:08:26.680 =====Discovery Log Entry 5====== 00:08:26.680 trtype: tcp 00:08:26.680 adrfam: ipv4 00:08:26.680 subtype: discovery subsystem referral 00:08:26.680 treq: not required 00:08:26.680 portid: 0 00:08:26.680 trsvcid: 4430 00:08:26.680 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:26.680 traddr: 10.0.0.2 00:08:26.680 eflags: none 00:08:26.680 sectype: none 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:26.680 Perform nvmf subsystem discovery via RPC 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.680 [ 00:08:26.680 { 00:08:26.680 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:26.680 "subtype": "Discovery", 00:08:26.680 "listen_addresses": [ 00:08:26.680 { 00:08:26.680 "trtype": "TCP", 00:08:26.680 "adrfam": "IPv4", 00:08:26.680 "traddr": "10.0.0.2", 00:08:26.680 "trsvcid": "4420" 00:08:26.680 } 00:08:26.680 ], 00:08:26.680 "allow_any_host": true, 00:08:26.680 "hosts": [] 00:08:26.680 }, 00:08:26.680 { 00:08:26.680 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.680 "subtype": "NVMe", 00:08:26.680 "listen_addresses": [ 00:08:26.680 { 00:08:26.680 "trtype": "TCP", 00:08:26.680 "adrfam": "IPv4", 00:08:26.680 "traddr": "10.0.0.2", 00:08:26.680 "trsvcid": "4420" 00:08:26.680 } 00:08:26.680 ], 00:08:26.680 "allow_any_host": true, 00:08:26.680 "hosts": [], 00:08:26.680 "serial_number": "SPDK00000000000001", 00:08:26.680 "model_number": "SPDK bdev Controller", 00:08:26.680 "max_namespaces": 32, 00:08:26.680 "min_cntlid": 1, 00:08:26.680 "max_cntlid": 65519, 00:08:26.680 "namespaces": [ 00:08:26.680 { 00:08:26.680 "nsid": 1, 00:08:26.680 "bdev_name": "Null1", 00:08:26.680 "name": "Null1", 00:08:26.680 "nguid": "8E7D4DE9977F4D51B3BEDD0EECFBC3C4", 00:08:26.680 "uuid": "8e7d4de9-977f-4d51-b3be-dd0eecfbc3c4" 00:08:26.680 } 00:08:26.680 ] 00:08:26.680 }, 00:08:26.680 { 00:08:26.680 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:26.680 "subtype": "NVMe", 00:08:26.680 "listen_addresses": [ 00:08:26.680 { 00:08:26.680 "trtype": "TCP", 00:08:26.680 "adrfam": "IPv4", 00:08:26.680 "traddr": "10.0.0.2", 00:08:26.680 "trsvcid": "4420" 00:08:26.680 } 00:08:26.680 ], 00:08:26.680 "allow_any_host": true, 00:08:26.680 "hosts": [], 00:08:26.680 "serial_number": "SPDK00000000000002", 00:08:26.680 "model_number": "SPDK bdev Controller", 00:08:26.680 "max_namespaces": 32, 00:08:26.680 "min_cntlid": 1, 00:08:26.680 "max_cntlid": 65519, 00:08:26.680 "namespaces": [ 00:08:26.680 { 00:08:26.680 "nsid": 1, 00:08:26.680 "bdev_name": "Null2", 00:08:26.680 "name": "Null2", 00:08:26.680 "nguid": "39A1D19B1B63477BBC6F9BD5935A28DE", 00:08:26.680 "uuid": "39a1d19b-1b63-477b-bc6f-9bd5935a28de" 00:08:26.680 } 00:08:26.680 ] 00:08:26.680 }, 00:08:26.680 { 00:08:26.680 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:26.680 "subtype": "NVMe", 00:08:26.680 "listen_addresses": [ 00:08:26.680 { 00:08:26.680 "trtype": "TCP", 00:08:26.680 "adrfam": "IPv4", 00:08:26.680 "traddr": "10.0.0.2", 00:08:26.680 "trsvcid": "4420" 00:08:26.680 } 00:08:26.680 ], 00:08:26.680 "allow_any_host": true, 
00:08:26.680 "hosts": [], 00:08:26.680 "serial_number": "SPDK00000000000003", 00:08:26.680 "model_number": "SPDK bdev Controller", 00:08:26.680 "max_namespaces": 32, 00:08:26.680 "min_cntlid": 1, 00:08:26.680 "max_cntlid": 65519, 00:08:26.680 "namespaces": [ 00:08:26.680 { 00:08:26.680 "nsid": 1, 00:08:26.680 "bdev_name": "Null3", 00:08:26.680 "name": "Null3", 00:08:26.680 "nguid": "4DF453AF99BF4568B067F22C993C1040", 00:08:26.680 "uuid": "4df453af-99bf-4568-b067-f22c993c1040" 00:08:26.680 } 00:08:26.680 ] 00:08:26.680 }, 00:08:26.680 { 00:08:26.680 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:26.680 "subtype": "NVMe", 00:08:26.680 "listen_addresses": [ 00:08:26.680 { 00:08:26.680 "trtype": "TCP", 00:08:26.680 "adrfam": "IPv4", 00:08:26.680 "traddr": "10.0.0.2", 00:08:26.680 "trsvcid": "4420" 00:08:26.680 } 00:08:26.680 ], 00:08:26.680 "allow_any_host": true, 00:08:26.680 "hosts": [], 00:08:26.680 "serial_number": "SPDK00000000000004", 00:08:26.680 "model_number": "SPDK bdev Controller", 00:08:26.680 "max_namespaces": 32, 00:08:26.680 "min_cntlid": 1, 00:08:26.680 "max_cntlid": 65519, 00:08:26.680 "namespaces": [ 00:08:26.680 { 00:08:26.680 "nsid": 1, 00:08:26.680 "bdev_name": "Null4", 00:08:26.680 "name": "Null4", 00:08:26.680 "nguid": "19097494D7194A5A85EB24EC8ED83EFD", 00:08:26.680 "uuid": "19097494-d719-4a5a-85eb-24ec8ed83efd" 00:08:26.680 } 00:08:26.680 ] 00:08:26.680 } 00:08:26.680 ] 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:26.680 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.681 rmmod nvme_tcp 00:08:26.681 rmmod nvme_fabrics 00:08:26.681 rmmod nvme_keyring 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1480357 ']' 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1480357 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1480357 ']' 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1480357 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:26.681 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1480357 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1480357' 00:08:26.940 killing process with pid 1480357 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1480357 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1480357 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.940 01:55:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.507 01:55:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.507 00:08:29.507 real 0m5.444s 00:08:29.507 user 0m4.348s 00:08:29.507 sys 0m1.839s 00:08:29.507 01:55:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.507 01:55:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:29.507 ************************************ 00:08:29.507 END TEST nvmf_target_discovery 00:08:29.507 ************************************ 00:08:29.507 01:55:34 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:29.507 01:55:34 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:29.507 01:55:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.507 01:55:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.507 01:55:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.507 ************************************ 00:08:29.507 START TEST nvmf_referrals 00:08:29.507 ************************************ 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:29.507 * Looking for test storage... 00:08:29.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:29.507 01:55:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.508 01:55:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.410 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.410 01:55:36 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.411 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.411 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.411 01:55:36 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.411 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:31.411 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.411 01:55:36 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:08:31.411 00:08:31.411 --- 10.0.0.2 ping statistics --- 00:08:31.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.411 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:08:31.411 00:08:31.411 --- 10.0.0.1 ping statistics --- 00:08:31.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.411 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.411 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1482409 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1482409 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1482409 ']' 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:31.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.412 01:55:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.412 [2024-07-14 01:55:36.991378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:31.412 [2024-07-14 01:55:36.991465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.412 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.412 [2024-07-14 01:55:37.063026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.670 [2024-07-14 01:55:37.159500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.671 [2024-07-14 01:55:37.159560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.671 [2024-07-14 01:55:37.159577] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.671 [2024-07-14 01:55:37.159590] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.671 [2024-07-14 01:55:37.159602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.671 [2024-07-14 01:55:37.159710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.671 [2024-07-14 01:55:37.159763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.671 [2024-07-14 01:55:37.159816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.671 [2024-07-14 01:55:37.159819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.671 [2024-07-14 01:55:37.320966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.671 [2024-07-14 01:55:37.333237] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.671 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.928 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.928 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:31.928 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.928 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.928 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.928 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:31.928 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:31.928 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:31.928 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:31.929 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:32.186 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:32.187 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:32.187 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.187 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.187 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.187 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:32.187 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.187 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:32.444 01:55:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:32.444 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:32.444 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:32.444 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:32.444 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:32.444 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:32.444 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.444 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:32.702 01:55:38 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:32.702 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:32.702 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:32.702 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:32.702 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.702 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:32.960 01:55:38 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.960 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:33.218 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:33.218 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:33.218 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:33.218 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:33.218 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.218 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.476 01:55:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.476 
01:55:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.476 rmmod nvme_tcp 00:08:33.476 rmmod nvme_fabrics 00:08:33.476 rmmod nvme_keyring 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1482409 ']' 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1482409 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1482409 ']' 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1482409 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1482409 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1482409' 00:08:33.476 killing process with pid 1482409 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1482409 00:08:33.476 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1482409 00:08:33.736 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.736 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.736 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.736 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.736 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.736 01:55:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.736 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.736 01:55:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.272 01:55:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.272 00:08:36.272 real 0m6.712s 00:08:36.272 user 0m10.177s 00:08:36.272 sys 0m2.122s 00:08:36.272 01:55:41 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.272 01:55:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.272 ************************************ 00:08:36.272 END TEST nvmf_referrals 00:08:36.272 ************************************ 00:08:36.272 01:55:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:36.272 01:55:41 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:36.272 01:55:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:36.272 01:55:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.272 01:55:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.272 ************************************ 00:08:36.272 START TEST nvmf_connect_disconnect 00:08:36.272 ************************************ 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:36.272 * Looking for test storage... 00:08:36.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.272 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.272 01:55:41 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.273 01:55:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:38.173 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:38.173 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.173 01:55:43 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:38.173 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:38.173 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.173 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:08:38.174 00:08:38.174 --- 10.0.0.2 ping statistics --- 00:08:38.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.174 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:08:38.174 00:08:38.174 --- 10.0.0.1 ping statistics --- 00:08:38.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.174 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1484700 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1484700 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1484700 ']' 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.174 01:55:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.174 [2024-07-14 01:55:43.712043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
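Note: the nvmf_tcp_init trace above builds a back-to-back NVMe/TCP topology out of the two E810 ports by moving one of them into a network namespace. As a rough sketch (assuming the same interface and namespace names that appear in this run: cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk), the equivalent manual setup is:

  ip netns add cvl_0_0_ns_spdk                 # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in the root namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator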
00:08:38.174 [2024-07-14 01:55:43.712119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.174 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.174 [2024-07-14 01:55:43.784404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.432 [2024-07-14 01:55:43.880265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.432 [2024-07-14 01:55:43.880321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.432 [2024-07-14 01:55:43.880337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.432 [2024-07-14 01:55:43.880350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.432 [2024-07-14 01:55:43.880362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.432 [2024-07-14 01:55:43.880426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.432 [2024-07-14 01:55:43.880481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.432 [2024-07-14 01:55:43.880545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.432 [2024-07-14 01:55:43.880548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 [2024-07-14 01:55:44.034795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:38.432 01:55:44 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 [2024-07-14 01:55:44.087738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:38.432 01:55:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:40.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.751 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:28.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.403 rmmod nvme_tcp 00:12:31.403 rmmod nvme_fabrics 00:12:31.403 rmmod nvme_keyring 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1484700 ']' 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1484700 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
1484700 ']' 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1484700 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1484700 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1484700' 00:12:31.403 killing process with pid 1484700 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1484700 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1484700 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.403 01:59:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.312 01:59:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.312 00:12:33.312 real 3m57.385s 00:12:33.312 user 15m4.384s 00:12:33.312 sys 0m34.690s 00:12:33.312 01:59:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:33.312 01:59:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.312 ************************************ 00:12:33.312 END TEST nvmf_connect_disconnect 00:12:33.312 ************************************ 00:12:33.312 01:59:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:33.312 01:59:38 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:33.312 01:59:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:33.312 01:59:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.312 01:59:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:33.312 ************************************ 00:12:33.312 START TEST nvmf_multitarget 00:12:33.312 ************************************ 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:33.312 * Looking for test storage... 
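Note: the nvmf_connect_disconnect run that just completed reduces to a short target-side configuration followed by 100 connect/disconnect cycles from the initiator. A minimal sketch, not the literal test script (rpc.py stands for SPDK's scripts/rpc.py pointed at the nvmf_tgt started inside the namespace, and nvme is nvme-cli on the initiator side):

  # target-side configuration, matching the rpc_cmd calls in the trace above
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                       # returns Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator-side loop; each iteration prints one of the
  # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines seen above
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done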
00:12:33.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:33.312 01:59:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.313 01:59:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:35.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:35.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:35.847 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:35.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
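Note: the discovery loop above resolves each supported PCI function to its kernel net device by globbing sysfs, which is how the cvl_0_0/cvl_0_1 names are found. A minimal equivalent for one of the E810 ports in this run (assuming the same BDF):

  pci=0000:0a:00.0
  ls /sys/bus/pci/devices/$pci/net/    # prints the bound net device, cvl_0_0 here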
00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:35.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.848 01:59:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:35.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:35.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:12:35.848 00:12:35.848 --- 10.0.0.2 ping statistics --- 00:12:35.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.848 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:12:35.848 00:12:35.848 --- 10.0.0.1 ping statistics --- 00:12:35.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.848 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1515928 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1515928 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1515928 ']' 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.848 [2024-07-14 01:59:41.175392] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:12:35.848 [2024-07-14 01:59:41.175480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.848 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.848 [2024-07-14 01:59:41.250285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.848 [2024-07-14 01:59:41.342288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.848 [2024-07-14 01:59:41.342350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.848 [2024-07-14 01:59:41.342366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.848 [2024-07-14 01:59:41.342379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.848 [2024-07-14 01:59:41.342390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.848 [2024-07-14 01:59:41.342477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.848 [2024-07-14 01:59:41.342558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.848 [2024-07-14 01:59:41.342637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.848 [2024-07-14 01:59:41.342640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:35.848 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:36.106 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:36.106 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:36.106 "nvmf_tgt_1" 00:12:36.106 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:36.364 "nvmf_tgt_2" 00:12:36.364 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:36.364 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:36.364 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:36.364 01:59:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:36.622 true 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:36.622 true 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.622 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.622 rmmod nvme_tcp 00:12:36.622 rmmod nvme_fabrics 00:12:36.880 rmmod nvme_keyring 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1515928 ']' 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1515928 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1515928 ']' 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1515928 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1515928 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1515928' 00:12:36.880 killing process with pid 1515928 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1515928 00:12:36.880 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1515928 00:12:37.140 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.140 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.140 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.140 01:59:42 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.140 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.140 01:59:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.140 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.140 01:59:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.048 01:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.048 00:12:39.048 real 0m5.747s 00:12:39.048 user 0m6.471s 00:12:39.048 sys 0m1.913s 00:12:39.048 01:59:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.048 01:59:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.048 ************************************ 00:12:39.048 END TEST nvmf_multitarget 00:12:39.048 ************************************ 00:12:39.048 01:59:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:39.048 01:59:44 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:39.048 01:59:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.048 01:59:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.048 01:59:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.048 ************************************ 00:12:39.048 START TEST nvmf_rpc 00:12:39.048 ************************************ 00:12:39.048 01:59:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:39.307 * Looking for test storage... 
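Note: the nvmf_multitarget run above exercises the multi-target RPC surface: it counts the default target, creates two more, deletes them again, and re-checks the count each time. A rough sketch, assuming multitarget_rpc.py is test/nvmf/target/multitarget_rpc.py from the SPDK tree talking to the same nvmf_tgt:

  multitarget_rpc.py nvmf_get_targets | jq length        # 1: only the default target
  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
  multitarget_rpc.py nvmf_get_targets | jq length        # 3
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
  multitarget_rpc.py nvmf_get_targets | jq length        # back to 1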
00:12:39.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:39.307 01:59:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:41.212 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:41.212 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:41.212 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.212 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:41.213 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.213 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.471 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:41.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:12:41.471 00:12:41.471 --- 10.0.0.2 ping statistics --- 00:12:41.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.471 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:12:41.471 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
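nvmf_tcp_init, traced above and below, turns the two E810 ports into a self-contained TCP test rig: cvl_0_0 becomes the target interface inside the new cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1/24, TCP port 4420 is opened in iptables, and one ping in each direction proves the path (the reverse ping from inside the namespace continues just below). The essential commands, condensed from the trace:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator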
00:12:41.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:12:41.471 00:12:41.471 --- 10.0.0.1 ping statistics --- 00:12:41.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.471 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:41.471 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.471 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1518141 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1518141 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1518141 ']' 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.472 01:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.472 [2024-07-14 01:59:46.997304] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:12:41.472 [2024-07-14 01:59:46.997393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.472 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.472 [2024-07-14 01:59:47.070065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.730 [2024-07-14 01:59:47.164418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.730 [2024-07-14 01:59:47.164475] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
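nvmfappstart then launches the target inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 1518141 above) and waitforlisten blocks until it answers on /var/tmp/spdk.sock, which is when the EAL and reactor notices below give way to RPC traffic. A minimal sketch of that launch-and-wait, assuming SPDK's scripts/rpc.py is available; the polling loop is illustrative, not the helper's exact code:

    # start the target in the namespace and wait for its RPC socket to answer
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done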
00:12:41.730 [2024-07-14 01:59:47.164501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.730 [2024-07-14 01:59:47.164515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.730 [2024-07-14 01:59:47.164527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.730 [2024-07-14 01:59:47.164617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.730 [2024-07-14 01:59:47.164672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.730 [2024-07-14 01:59:47.164734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.730 [2024-07-14 01:59:47.164737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.730 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:41.730 "tick_rate": 2700000000, 00:12:41.730 "poll_groups": [ 00:12:41.730 { 00:12:41.730 "name": "nvmf_tgt_poll_group_000", 00:12:41.730 "admin_qpairs": 0, 00:12:41.730 "io_qpairs": 0, 00:12:41.730 "current_admin_qpairs": 0, 00:12:41.730 "current_io_qpairs": 0, 00:12:41.730 "pending_bdev_io": 0, 00:12:41.730 "completed_nvme_io": 0, 00:12:41.730 "transports": [] 00:12:41.730 }, 00:12:41.730 { 00:12:41.730 "name": "nvmf_tgt_poll_group_001", 00:12:41.730 "admin_qpairs": 0, 00:12:41.730 "io_qpairs": 0, 00:12:41.730 "current_admin_qpairs": 0, 00:12:41.730 "current_io_qpairs": 0, 00:12:41.730 "pending_bdev_io": 0, 00:12:41.730 "completed_nvme_io": 0, 00:12:41.730 "transports": [] 00:12:41.730 }, 00:12:41.730 { 00:12:41.730 "name": "nvmf_tgt_poll_group_002", 00:12:41.730 "admin_qpairs": 0, 00:12:41.730 "io_qpairs": 0, 00:12:41.730 "current_admin_qpairs": 0, 00:12:41.730 "current_io_qpairs": 0, 00:12:41.730 "pending_bdev_io": 0, 00:12:41.730 "completed_nvme_io": 0, 00:12:41.730 "transports": [] 00:12:41.730 }, 00:12:41.730 { 00:12:41.730 "name": "nvmf_tgt_poll_group_003", 00:12:41.730 "admin_qpairs": 0, 00:12:41.730 "io_qpairs": 0, 00:12:41.730 "current_admin_qpairs": 0, 00:12:41.730 "current_io_qpairs": 0, 00:12:41.730 "pending_bdev_io": 0, 00:12:41.730 "completed_nvme_io": 0, 00:12:41.730 "transports": [] 00:12:41.730 } 00:12:41.730 ] 00:12:41.730 }' 00:12:41.731 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:41.731 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:41.731 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:41.731 01:59:47 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:41.731 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:41.731 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.989 [2024-07-14 01:59:47.443302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:41.989 "tick_rate": 2700000000, 00:12:41.989 "poll_groups": [ 00:12:41.989 { 00:12:41.989 "name": "nvmf_tgt_poll_group_000", 00:12:41.989 "admin_qpairs": 0, 00:12:41.989 "io_qpairs": 0, 00:12:41.989 "current_admin_qpairs": 0, 00:12:41.989 "current_io_qpairs": 0, 00:12:41.989 "pending_bdev_io": 0, 00:12:41.989 "completed_nvme_io": 0, 00:12:41.989 "transports": [ 00:12:41.989 { 00:12:41.989 "trtype": "TCP" 00:12:41.989 } 00:12:41.989 ] 00:12:41.989 }, 00:12:41.989 { 00:12:41.989 "name": "nvmf_tgt_poll_group_001", 00:12:41.989 "admin_qpairs": 0, 00:12:41.989 "io_qpairs": 0, 00:12:41.989 "current_admin_qpairs": 0, 00:12:41.989 "current_io_qpairs": 0, 00:12:41.989 "pending_bdev_io": 0, 00:12:41.989 "completed_nvme_io": 0, 00:12:41.989 "transports": [ 00:12:41.989 { 00:12:41.989 "trtype": "TCP" 00:12:41.989 } 00:12:41.989 ] 00:12:41.989 }, 00:12:41.989 { 00:12:41.989 "name": "nvmf_tgt_poll_group_002", 00:12:41.989 "admin_qpairs": 0, 00:12:41.989 "io_qpairs": 0, 00:12:41.989 "current_admin_qpairs": 0, 00:12:41.989 "current_io_qpairs": 0, 00:12:41.989 "pending_bdev_io": 0, 00:12:41.989 "completed_nvme_io": 0, 00:12:41.989 "transports": [ 00:12:41.989 { 00:12:41.989 "trtype": "TCP" 00:12:41.989 } 00:12:41.989 ] 00:12:41.989 }, 00:12:41.989 { 00:12:41.989 "name": "nvmf_tgt_poll_group_003", 00:12:41.989 "admin_qpairs": 0, 00:12:41.989 "io_qpairs": 0, 00:12:41.989 "current_admin_qpairs": 0, 00:12:41.989 "current_io_qpairs": 0, 00:12:41.989 "pending_bdev_io": 0, 00:12:41.989 "completed_nvme_io": 0, 00:12:41.989 "transports": [ 00:12:41.989 { 00:12:41.989 "trtype": "TCP" 00:12:41.989 } 00:12:41.989 ] 00:12:41.989 } 00:12:41.989 ] 00:12:41.989 }' 00:12:41.989 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
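The jcount/jsum helpers from target/rpc.sh, seen above and below, reduce the nvmf_get_stats JSON with jq: jcount counts the elements a filter selects (four poll groups, one per core of the 0xF mask) and jsum totals them with awk (admin and I/O qpair counts are still zero before any host connects). In between, nvmf_create_transport -t tcp -o -u 8192 adds the TCP transport, which is why the second stats dump carries a TCP entry in every poll group. Condensed, roughly as traced:

    stats=$(rpc_cmd nvmf_get_stats)                                  # JSON as printed above
    jcount() { jq "$1" <<<"$stats" | wc -l; }                        # how many elements match the filter
    jsum()   { jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'; }  # sum of the matched numbers
    (( $(jcount '.poll_groups[].name') == 4 ))                       # one poll group per reactor core
    (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))                 # no admin qpairs yet
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))                    # no I/O qpairs yet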
00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.990 Malloc1 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.990 [2024-07-14 01:59:47.586395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:41.990 [2024-07-14 01:59:47.608923] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:41.990 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:41.990 could not add new controller: failed to write to nvme-fabrics device 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.990 01:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.923 01:59:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.923 01:59:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:42.923 01:59:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.923 01:59:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:42.923 01:59:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.821 01:59:50 
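The failed connect above is deliberate: cnode1 was created and then switched to allow-list mode with nvmf_subsystem_allow_any_host -d, so the host NQN ...5b23e107... is rejected with "does not allow host" until nvmf_subsystem_add_host whitelists it, after which the same connect to 10.0.0.2:4420 succeeds. The trace below repeats the check the other way around, removing the host again and then re-enabling allow_any_host. The sequence, condensed from the trace (the long uuid NQN is folded into a variable here):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1      # enforce the per-host allow list
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN"                                                 # fails: host not on the allow list
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"    # whitelist this host NQN
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN"                                                 # now succeeds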
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.821 [2024-07-14 01:59:50.457787] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:44.821 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:44.821 could not add new controller: failed to write to nvme-fabrics device 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.821 01:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.752 01:59:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.752 01:59:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:45.752 01:59:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.752 01:59:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:45.752 01:59:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.646 01:59:53 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.646 [2024-07-14 01:59:53.229729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.646 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.261 01:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.261 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.261 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.261 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:48.261 01:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.786 01:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.786 [2024-07-14 01:59:56.023548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.786 01:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.043 01:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.043 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:51.043 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.043 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:51.043 01:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.570 [2024-07-14 01:59:58.816765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.570 01:59:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.829 01:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.829 01:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.829 01:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.829 01:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.829 01:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.357 [2024-07-14 02:00:01.596724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.357 02:00:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.613 02:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.613 02:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.613 02:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.613 02:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:56.613 02:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.136 
02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.136 [2024-07-14 02:00:04.405634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.136 02:00:04 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.136 02:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.699 02:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.699 02:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.699 02:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.699 02:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:59.699 02:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 [2024-07-14 02:00:07.217013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 [2024-07-14 02:00:07.265071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.593 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.850 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 [2024-07-14 02:00:07.313269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
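The connect-and-poll sequence traced at the start of this test case reduces to the pattern below; this is a minimal sketch reconstructed from the xtrace output, not the literal waitforserial()/waitforserial_disconnect() implementations in autotest_common.sh.

  # Attach the kernel NVMe/TCP initiator to the test subsystem, then poll lsblk
  # until exactly one block device reports the expected serial number.
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  i=0
  while (( i++ <= 15 )); do
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
      sleep 2
  done
  # Tear the host session down again; the reverse wait greps until the serial disappears
  # (the 1-second retry interval here is an assumption, not taken from the script).
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done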
00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 [2024-07-14 02:00:07.361402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
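Each of the $loops passes traced here drives the same six-RPC subsystem lifecycle against the target. A condensed sketch, assuming rpc_cmd resolves to scripts/rpc.py talking to the running nvmf_tgt:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME     # subsystem with a fixed serial
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                     # expose bdev Malloc1 as nsid 1
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1                        # detach nsid 1 again
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1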
00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 [2024-07-14 02:00:07.409574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:01.851 "tick_rate": 2700000000, 00:13:01.851 "poll_groups": [ 00:13:01.851 { 00:13:01.851 "name": "nvmf_tgt_poll_group_000", 00:13:01.851 "admin_qpairs": 2, 00:13:01.851 "io_qpairs": 84, 00:13:01.851 "current_admin_qpairs": 0, 00:13:01.851 "current_io_qpairs": 0, 00:13:01.851 "pending_bdev_io": 0, 00:13:01.851 "completed_nvme_io": 179, 00:13:01.851 "transports": [ 00:13:01.851 { 00:13:01.851 "trtype": "TCP" 00:13:01.851 } 00:13:01.851 ] 00:13:01.851 }, 00:13:01.851 { 00:13:01.851 "name": "nvmf_tgt_poll_group_001", 00:13:01.851 "admin_qpairs": 2, 00:13:01.851 "io_qpairs": 84, 00:13:01.851 "current_admin_qpairs": 0, 00:13:01.851 "current_io_qpairs": 0, 00:13:01.851 "pending_bdev_io": 0, 00:13:01.851 "completed_nvme_io": 199, 00:13:01.851 "transports": [ 00:13:01.851 { 00:13:01.851 "trtype": "TCP" 00:13:01.851 } 00:13:01.851 ] 00:13:01.851 }, 00:13:01.851 { 00:13:01.851 
"name": "nvmf_tgt_poll_group_002", 00:13:01.851 "admin_qpairs": 1, 00:13:01.851 "io_qpairs": 84, 00:13:01.851 "current_admin_qpairs": 0, 00:13:01.851 "current_io_qpairs": 0, 00:13:01.851 "pending_bdev_io": 0, 00:13:01.851 "completed_nvme_io": 172, 00:13:01.851 "transports": [ 00:13:01.851 { 00:13:01.851 "trtype": "TCP" 00:13:01.851 } 00:13:01.851 ] 00:13:01.851 }, 00:13:01.851 { 00:13:01.851 "name": "nvmf_tgt_poll_group_003", 00:13:01.851 "admin_qpairs": 2, 00:13:01.851 "io_qpairs": 84, 00:13:01.851 "current_admin_qpairs": 0, 00:13:01.851 "current_io_qpairs": 0, 00:13:01.851 "pending_bdev_io": 0, 00:13:01.851 "completed_nvme_io": 136, 00:13:01.851 "transports": [ 00:13:01.851 { 00:13:01.851 "trtype": "TCP" 00:13:01.851 } 00:13:01.851 ] 00:13:01.851 } 00:13:01.851 ] 00:13:01.851 }' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:01.851 02:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:01.852 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.852 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:01.852 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.852 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:01.852 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.852 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:02.109 rmmod nvme_tcp 00:13:02.109 rmmod nvme_fabrics 00:13:02.109 rmmod nvme_keyring 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1518141 ']' 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1518141 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1518141 ']' 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1518141 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1518141 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1518141' 00:13:02.109 killing process with pid 1518141 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1518141 00:13:02.109 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1518141 00:13:02.367 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.367 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.367 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.367 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.367 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.367 02:00:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.368 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.368 02:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.269 02:00:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:04.269 00:13:04.269 real 0m25.230s 00:13:04.269 user 1m21.907s 00:13:04.269 sys 0m4.093s 00:13:04.269 02:00:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.269 02:00:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.269 ************************************ 00:13:04.269 END TEST nvmf_rpc 00:13:04.269 ************************************ 00:13:04.269 02:00:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:04.269 02:00:09 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:04.269 02:00:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:04.269 02:00:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.269 02:00:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:04.527 ************************************ 00:13:04.527 START TEST nvmf_invalid 00:13:04.527 ************************************ 00:13:04.527 02:00:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:04.527 * Looking for test storage... 
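The jsum helper used for the nvmf_get_stats assertions at the end of the nvmf_rpc test above is a jq projection summed by awk; a minimal sketch of that pattern (the $stats variable name is an assumption):

  # $stats holds the JSON printed by 'rpc.py nvmf_get_stats'
  jsum() { jq "$1" <<< "$stats" | awk '{s += $1} END {print s}'; }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in the run above
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 poll groups x 84 = 336 in the run above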
00:13:04.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.527 02:00:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:07.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:07.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:07.065 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:07.066 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:07.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:07.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
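The nvmf_tcp_init steps above split the two e810 ports between the root namespace (initiator side) and a dedicated namespace (target side) so both ends of the NVMe/TCP connection use real NICs; a condensed sketch of that setup as traced:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator port
  ping -c 1 10.0.0.2                                             # sanity-check reachability in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1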
00:13:07.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:13:07.066 00:13:07.066 --- 10.0.0.2 ping statistics --- 00:13:07.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.066 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:13:07.066 00:13:07.066 --- 10.0.0.1 ping statistics --- 00:13:07.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.066 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1523248 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1523248 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1523248 ']' 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.066 [2024-07-14 02:00:12.368466] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:13:07.066 [2024-07-14 02:00:12.368557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.066 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.066 [2024-07-14 02:00:12.434168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.066 [2024-07-14 02:00:12.525183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.066 [2024-07-14 02:00:12.525231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.066 [2024-07-14 02:00:12.525253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.066 [2024-07-14 02:00:12.525264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.066 [2024-07-14 02:00:12.525274] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.066 [2024-07-14 02:00:12.525323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.066 [2024-07-14 02:00:12.525385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.066 [2024-07-14 02:00:12.525458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.066 [2024-07-14 02:00:12.525460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:07.066 02:00:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1231 00:13:07.376 [2024-07-14 02:00:12.879177] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:07.376 02:00:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:07.376 { 00:13:07.376 "nqn": "nqn.2016-06.io.spdk:cnode1231", 00:13:07.376 "tgt_name": "foobar", 00:13:07.376 "method": "nvmf_create_subsystem", 00:13:07.376 "req_id": 1 00:13:07.376 } 00:13:07.376 Got JSON-RPC error response 00:13:07.376 response: 00:13:07.376 { 00:13:07.376 "code": -32603, 00:13:07.376 "message": "Unable to find target foobar" 00:13:07.376 }' 00:13:07.376 02:00:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:07.376 { 00:13:07.376 "nqn": "nqn.2016-06.io.spdk:cnode1231", 00:13:07.376 "tgt_name": "foobar", 00:13:07.376 "method": "nvmf_create_subsystem", 00:13:07.376 "req_id": 1 00:13:07.376 } 00:13:07.376 Got JSON-RPC error response 00:13:07.376 response: 00:13:07.376 { 00:13:07.376 "code": -32603, 00:13:07.376 "message": "Unable to find target foobar" 00:13:07.376 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:07.376 02:00:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:07.376 02:00:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5629 00:13:07.633 [2024-07-14 02:00:13.144094] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5629: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:07.633 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:07.633 { 00:13:07.633 "nqn": "nqn.2016-06.io.spdk:cnode5629", 00:13:07.633 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.633 "method": "nvmf_create_subsystem", 00:13:07.633 "req_id": 1 00:13:07.633 } 00:13:07.633 Got JSON-RPC error response 00:13:07.633 response: 00:13:07.633 { 00:13:07.633 "code": -32602, 00:13:07.633 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.633 }' 00:13:07.633 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:07.633 { 00:13:07.633 "nqn": "nqn.2016-06.io.spdk:cnode5629", 00:13:07.633 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.633 "method": "nvmf_create_subsystem", 00:13:07.633 "req_id": 1 00:13:07.633 } 00:13:07.633 Got JSON-RPC error response 00:13:07.633 response: 00:13:07.633 { 00:13:07.633 "code": -32602, 00:13:07.633 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.633 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:07.633 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:07.633 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22872 00:13:07.890 [2024-07-14 02:00:13.441105] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22872: invalid model number 'SPDK_Controller' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:07.890 { 00:13:07.890 "nqn": "nqn.2016-06.io.spdk:cnode22872", 00:13:07.890 "model_number": "SPDK_Controller\u001f", 00:13:07.890 "method": "nvmf_create_subsystem", 00:13:07.890 "req_id": 1 00:13:07.890 } 00:13:07.890 Got JSON-RPC error response 00:13:07.890 response: 00:13:07.890 { 00:13:07.890 "code": -32602, 00:13:07.890 "message": "Invalid MN SPDK_Controller\u001f" 00:13:07.890 }' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:07.890 { 00:13:07.890 "nqn": "nqn.2016-06.io.spdk:cnode22872", 00:13:07.890 "model_number": "SPDK_Controller\u001f", 00:13:07.890 "method": "nvmf_create_subsystem", 00:13:07.890 "req_id": 1 00:13:07.890 } 00:13:07.890 Got JSON-RPC error response 00:13:07.890 response: 00:13:07.890 { 00:13:07.890 "code": -32602, 00:13:07.890 "message": "Invalid MN SPDK_Controller\u001f" 00:13:07.890 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:07.890 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
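The string+= trace above comes from a helper that assembles a random string from decimal ASCII codes; a minimal sketch of the technique (RANDOM-based index selection is an assumption, and whitespace/quoting edge cases are ignored):

  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))        # decimal codes for printable ASCII plus DEL, as listed in the trace
      for ((ll = 0; ll < length; ll++)); do
          # pick a code, print it as hex, then expand the \xNN escape into the character
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }
  gen_random_s 21    # this run seeds RANDOM=0 earlier, so the 21-character string built above is deterministic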
00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:13:07.891 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'RxE(pp+[YPG' 00:13:08.408 02:00:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '\hz,31-2Ab#D|PPcRm2h1l1LVzLeqbO8~wC&Q>YPG' nqn.2016-06.io.spdk:cnode16674 00:13:08.665 [2024-07-14 02:00:14.131510] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16674: invalid model number '\hz,31-2Ab#D|PPcRm2h1l1LVzLeqbO8~wC&Q>YPG' 00:13:08.665 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:08.665 { 00:13:08.665 "nqn": "nqn.2016-06.io.spdk:cnode16674", 00:13:08.665 "model_number": "\\hz,31-2Ab#D|PPcRm2h1l1LVzLeqbO8~wC&Q>YPG", 00:13:08.665 "method": "nvmf_create_subsystem", 00:13:08.665 "req_id": 1 00:13:08.665 } 00:13:08.665 Got JSON-RPC error response 00:13:08.665 response: 00:13:08.665 { 00:13:08.665 "code": -32602, 00:13:08.665 "message": "Invalid MN \\hz,31-2Ab#D|PPcRm2h1l1LVzLeqbO8~wC&Q>YPG" 00:13:08.665 }' 00:13:08.665 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:08.665 { 00:13:08.665 "nqn": "nqn.2016-06.io.spdk:cnode16674", 00:13:08.665 "model_number": "\\hz,31-2Ab#D|PPcRm2h1l1LVzLeqbO8~wC&Q>YPG", 00:13:08.665 "method": "nvmf_create_subsystem", 00:13:08.665 "req_id": 1 00:13:08.665 } 00:13:08.665 Got JSON-RPC error response 00:13:08.665 response: 00:13:08.665 { 00:13:08.665 "code": -32602, 00:13:08.665 "message": "Invalid MN \\hz,31-2Ab#D|PPcRm2h1l1LVzLeqbO8~wC&Q>YPG" 00:13:08.665 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:08.665 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:08.928 [2024-07-14 02:00:14.376404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.928 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:09.186 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:09.186 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:09.186 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:09.186 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:09.186 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:09.187 [2024-07-14 02:00:14.878084] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:09.444 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:09.444 { 00:13:09.444 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:09.444 "listen_address": { 00:13:09.444 "trtype": "tcp", 00:13:09.444 "traddr": "", 00:13:09.444 "trsvcid": "4421" 00:13:09.444 }, 00:13:09.444 "method": "nvmf_subsystem_remove_listener", 00:13:09.444 "req_id": 1 00:13:09.444 } 00:13:09.444 Got JSON-RPC error response 00:13:09.444 response: 00:13:09.444 { 00:13:09.444 "code": -32602, 00:13:09.444 "message": "Invalid parameters" 00:13:09.444 }' 00:13:09.444 02:00:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@70 -- # [[ request: 00:13:09.444 { 00:13:09.444 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:09.444 "listen_address": { 00:13:09.444 "trtype": "tcp", 00:13:09.444 "traddr": "", 00:13:09.444 "trsvcid": "4421" 00:13:09.444 }, 00:13:09.444 "method": "nvmf_subsystem_remove_listener", 00:13:09.444 "req_id": 1 00:13:09.445 } 00:13:09.445 Got JSON-RPC error response 00:13:09.445 response: 00:13:09.445 { 00:13:09.445 "code": -32602, 00:13:09.445 "message": "Invalid parameters" 00:13:09.445 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:09.445 02:00:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31456 -i 0 00:13:09.445 [2024-07-14 02:00:15.126863] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31456: invalid cntlid range [0-65519] 00:13:09.703 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:09.703 { 00:13:09.703 "nqn": "nqn.2016-06.io.spdk:cnode31456", 00:13:09.703 "min_cntlid": 0, 00:13:09.703 "method": "nvmf_create_subsystem", 00:13:09.703 "req_id": 1 00:13:09.703 } 00:13:09.703 Got JSON-RPC error response 00:13:09.703 response: 00:13:09.703 { 00:13:09.703 "code": -32602, 00:13:09.703 "message": "Invalid cntlid range [0-65519]" 00:13:09.703 }' 00:13:09.703 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:09.703 { 00:13:09.703 "nqn": "nqn.2016-06.io.spdk:cnode31456", 00:13:09.703 "min_cntlid": 0, 00:13:09.703 "method": "nvmf_create_subsystem", 00:13:09.703 "req_id": 1 00:13:09.703 } 00:13:09.703 Got JSON-RPC error response 00:13:09.703 response: 00:13:09.703 { 00:13:09.703 "code": -32602, 00:13:09.703 "message": "Invalid cntlid range [0-65519]" 00:13:09.703 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.703 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode564 -i 65520 00:13:09.703 [2024-07-14 02:00:15.383706] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode564: invalid cntlid range [65520-65519] 00:13:09.962 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:09.962 { 00:13:09.962 "nqn": "nqn.2016-06.io.spdk:cnode564", 00:13:09.962 "min_cntlid": 65520, 00:13:09.962 "method": "nvmf_create_subsystem", 00:13:09.962 "req_id": 1 00:13:09.962 } 00:13:09.962 Got JSON-RPC error response 00:13:09.962 response: 00:13:09.962 { 00:13:09.962 "code": -32602, 00:13:09.962 "message": "Invalid cntlid range [65520-65519]" 00:13:09.962 }' 00:13:09.962 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:09.962 { 00:13:09.962 "nqn": "nqn.2016-06.io.spdk:cnode564", 00:13:09.962 "min_cntlid": 65520, 00:13:09.962 "method": "nvmf_create_subsystem", 00:13:09.962 "req_id": 1 00:13:09.962 } 00:13:09.962 Got JSON-RPC error response 00:13:09.962 response: 00:13:09.962 { 00:13:09.962 "code": -32602, 00:13:09.962 "message": "Invalid cntlid range [65520-65519]" 00:13:09.962 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.962 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16048 -I 0 00:13:09.962 [2024-07-14 02:00:15.628553] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16048: invalid 
cntlid range [1-0] 00:13:09.962 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:09.962 { 00:13:09.962 "nqn": "nqn.2016-06.io.spdk:cnode16048", 00:13:09.962 "max_cntlid": 0, 00:13:09.962 "method": "nvmf_create_subsystem", 00:13:09.962 "req_id": 1 00:13:09.962 } 00:13:09.962 Got JSON-RPC error response 00:13:09.962 response: 00:13:09.962 { 00:13:09.962 "code": -32602, 00:13:09.962 "message": "Invalid cntlid range [1-0]" 00:13:09.962 }' 00:13:09.962 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:09.962 { 00:13:09.962 "nqn": "nqn.2016-06.io.spdk:cnode16048", 00:13:09.962 "max_cntlid": 0, 00:13:09.962 "method": "nvmf_create_subsystem", 00:13:09.962 "req_id": 1 00:13:09.962 } 00:13:09.962 Got JSON-RPC error response 00:13:09.962 response: 00:13:09.962 { 00:13:09.962 "code": -32602, 00:13:09.962 "message": "Invalid cntlid range [1-0]" 00:13:09.962 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.962 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26429 -I 65520 00:13:10.220 [2024-07-14 02:00:15.881425] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26429: invalid cntlid range [1-65520] 00:13:10.220 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:10.220 { 00:13:10.220 "nqn": "nqn.2016-06.io.spdk:cnode26429", 00:13:10.220 "max_cntlid": 65520, 00:13:10.220 "method": "nvmf_create_subsystem", 00:13:10.220 "req_id": 1 00:13:10.220 } 00:13:10.220 Got JSON-RPC error response 00:13:10.220 response: 00:13:10.220 { 00:13:10.220 "code": -32602, 00:13:10.220 "message": "Invalid cntlid range [1-65520]" 00:13:10.220 }' 00:13:10.220 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:10.220 { 00:13:10.220 "nqn": "nqn.2016-06.io.spdk:cnode26429", 00:13:10.220 "max_cntlid": 65520, 00:13:10.220 "method": "nvmf_create_subsystem", 00:13:10.220 "req_id": 1 00:13:10.220 } 00:13:10.220 Got JSON-RPC error response 00:13:10.220 response: 00:13:10.220 { 00:13:10.220 "code": -32602, 00:13:10.220 "message": "Invalid cntlid range [1-65520]" 00:13:10.220 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.220 02:00:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2483 -i 6 -I 5 00:13:10.478 [2024-07-14 02:00:16.126248] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2483: invalid cntlid range [6-5] 00:13:10.478 02:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:10.478 { 00:13:10.478 "nqn": "nqn.2016-06.io.spdk:cnode2483", 00:13:10.478 "min_cntlid": 6, 00:13:10.478 "max_cntlid": 5, 00:13:10.478 "method": "nvmf_create_subsystem", 00:13:10.478 "req_id": 1 00:13:10.478 } 00:13:10.478 Got JSON-RPC error response 00:13:10.478 response: 00:13:10.478 { 00:13:10.478 "code": -32602, 00:13:10.478 "message": "Invalid cntlid range [6-5]" 00:13:10.478 }' 00:13:10.478 02:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:10.478 { 00:13:10.478 "nqn": "nqn.2016-06.io.spdk:cnode2483", 00:13:10.478 "min_cntlid": 6, 00:13:10.478 "max_cntlid": 5, 00:13:10.478 "method": "nvmf_create_subsystem", 00:13:10.478 "req_id": 1 00:13:10.478 } 00:13:10.478 Got JSON-RPC error response 00:13:10.478 response: 00:13:10.478 { 00:13:10.478 "code": -32602, 
00:13:10.478 "message": "Invalid cntlid range [6-5]" 00:13:10.478 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.478 02:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:10.736 { 00:13:10.736 "name": "foobar", 00:13:10.736 "method": "nvmf_delete_target", 00:13:10.736 "req_id": 1 00:13:10.736 } 00:13:10.736 Got JSON-RPC error response 00:13:10.736 response: 00:13:10.736 { 00:13:10.736 "code": -32602, 00:13:10.736 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:10.736 }' 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:10.736 { 00:13:10.736 "name": "foobar", 00:13:10.736 "method": "nvmf_delete_target", 00:13:10.736 "req_id": 1 00:13:10.736 } 00:13:10.736 Got JSON-RPC error response 00:13:10.736 response: 00:13:10.736 { 00:13:10.736 "code": -32602, 00:13:10.736 "message": "The specified target doesn't exist, cannot delete it." 00:13:10.736 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:10.736 rmmod nvme_tcp 00:13:10.736 rmmod nvme_fabrics 00:13:10.736 rmmod nvme_keyring 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1523248 ']' 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1523248 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1523248 ']' 00:13:10.736 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1523248 00:13:10.737 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:10.737 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:10.737 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1523248 00:13:10.737 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:10.737 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:10.737 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1523248' 00:13:10.737 killing process with pid 1523248 00:13:10.737 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1523248 00:13:10.737 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- 
# wait 1523248 00:13:10.994 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:10.994 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:10.994 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:10.994 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:10.994 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:10.994 02:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.994 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.994 02:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.900 02:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.158 00:13:13.158 real 0m8.619s 00:13:13.158 user 0m19.776s 00:13:13.158 sys 0m2.430s 00:13:13.158 02:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:13.158 02:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:13.158 ************************************ 00:13:13.158 END TEST nvmf_invalid 00:13:13.158 ************************************ 00:13:13.158 02:00:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:13.158 02:00:18 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:13.158 02:00:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:13.158 02:00:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.159 02:00:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.159 ************************************ 00:13:13.159 START TEST nvmf_abort 00:13:13.159 ************************************ 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:13.159 * Looking for test storage... 
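The nvmf_invalid run that just finished exercises the target's JSON-RPC parameter validation: each call is expected to fail, and the test only asserts that the returned error text matches. A minimal sketch of that pattern, assuming the workspace layout seen in this run (the rpc.py path, subsystem NQN, and error string are taken from the trace above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # min_cntlid of 0 is outside the valid range, so the RPC must be rejected
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31456 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] || exit 1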
00:13:13.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.159 02:00:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:15.063 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.063 02:00:20 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:15.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:15.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:15.063 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.063 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:15.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:13:15.322 00:13:15.322 --- 10.0.0.2 ping statistics --- 00:13:15.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.322 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:13:15.322 00:13:15.322 --- 10.0.0.1 ping statistics --- 00:13:15.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.322 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1525832 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1525832 00:13:15.322 02:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1525832 ']' 00:13:15.323 02:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.323 02:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.323 02:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.323 02:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.323 02:00:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.323 [2024-07-14 02:00:20.956485] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:15.323 [2024-07-14 02:00:20.956579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.323 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.581 [2024-07-14 02:00:21.031772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.581 [2024-07-14 02:00:21.125015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.581 [2024-07-14 02:00:21.125080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:15.581 [2024-07-14 02:00:21.125106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.581 [2024-07-14 02:00:21.125120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.581 [2024-07-14 02:00:21.125132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.581 [2024-07-14 02:00:21.125220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.581 [2024-07-14 02:00:21.125282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.581 [2024-07-14 02:00:21.125279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.581 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.581 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:15.582 02:00:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.582 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:15.582 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.582 02:00:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.840 [2024-07-14 02:00:21.277060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.840 Malloc0 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.840 Delay0 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.840 02:00:21 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.840 [2024-07-14 02:00:21.343000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.840 02:00:21 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:15.840 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.840 [2024-07-14 02:00:21.450133] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:18.377 Initializing NVMe Controllers 00:13:18.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:18.377 controller IO queue size 128 less than required 00:13:18.377 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:18.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:18.377 Initialization complete. Launching workers. 
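The abort example was launched with the command shown earlier in this trace; the summary lines that follow appear to count I/Os completed on the namespace, abort commands submitted against them, and how many of those aborts succeeded, did not abort their target I/O, or failed outright. A sketch of the invocation with the flag meanings inferred from this run rather than from the tool's help text:

    # target address matches the listener created above; -q 128 is the queue depth
    # that triggers the "IO queue size 128 less than required" notice
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128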
00:13:18.377 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 32199 00:13:18.377 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32263, failed to submit 62 00:13:18.377 success 32203, unsuccess 60, failed 0 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.377 rmmod nvme_tcp 00:13:18.377 rmmod nvme_fabrics 00:13:18.377 rmmod nvme_keyring 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1525832 ']' 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1525832 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1525832 ']' 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1525832 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1525832 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1525832' 00:13:18.377 killing process with pid 1525832 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1525832 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1525832 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.377 02:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.912 02:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:20.912 00:13:20.912 real 0m7.393s 00:13:20.912 user 0m10.944s 00:13:20.912 sys 0m2.581s 00:13:20.912 02:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.912 02:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:20.912 ************************************ 00:13:20.912 END TEST nvmf_abort 00:13:20.912 ************************************ 00:13:20.912 02:00:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:20.912 02:00:26 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:20.912 02:00:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:20.913 02:00:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.913 02:00:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:20.913 ************************************ 00:13:20.913 START TEST nvmf_ns_hotplug_stress 00:13:20.913 ************************************ 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:20.913 * Looking for test storage... 00:13:20.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.913 02:00:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.913 02:00:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:20.913 02:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:22.815 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:22.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.815 02:00:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:22.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.815 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:22.816 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.816 02:00:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:13:22.816 00:13:22.816 --- 10.0.0.2 ping statistics --- 00:13:22.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.816 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:13:22.816 00:13:22.816 --- 10.0.0.1 ping statistics --- 00:13:22.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.816 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1528104 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1528104 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1528104 ']' 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.816 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.816 [2024-07-14 02:00:28.408754] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
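At this point nvmfappstart launches the NVMe-oF target inside the cvl_0_0_ns_spdk namespace that nvmftestinit just configured (target port cvl_0_0 at 10.0.0.2, initiator port cvl_0_1 at 10.0.0.1, TCP port 4420 opened in iptables) and waits for its RPC socket. A minimal standalone sketch of that sequence, run as root, with ./spdk standing in for the Jenkins workspace checkout and a simple spdk_get_version poll standing in for the waitforlisten helper seen in the trace:

# Assumes a built SPDK checkout at ./spdk and the namespace/addresses set up as above.
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# -m 0xE  : run reactors on cores 1-3 (matches the "Reactor started on core 1/2/3" notices)
# -e 0xFFFF : enable all tracepoint groups, -i 0 : shared-memory id used later by spdk_trace
# Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers.
until ./spdk/scripts/rpc.py spdk_get_version > /dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done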
00:13:22.816 [2024-07-14 02:00:28.408857] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.816 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.816 [2024-07-14 02:00:28.481034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.074 [2024-07-14 02:00:28.575516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.074 [2024-07-14 02:00:28.575583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.074 [2024-07-14 02:00:28.575610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.074 [2024-07-14 02:00:28.575624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.074 [2024-07-14 02:00:28.575636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.074 [2024-07-14 02:00:28.575722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.074 [2024-07-14 02:00:28.575776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.074 [2024-07-14 02:00:28.575779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.074 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.074 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:23.074 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.074 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.074 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.074 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.074 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:23.074 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:23.378 [2024-07-14 02:00:28.963590] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.378 02:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:23.635 02:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.891 [2024-07-14 02:00:29.478483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.891 02:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:24.148 02:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:24.404 Malloc0 00:13:24.404 02:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:24.662 Delay0 00:13:24.662 02:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.228 02:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:25.228 NULL1 00:13:25.228 02:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:25.486 02:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1528528 00:13:25.486 02:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:25.486 02:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:25.486 02:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.745 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.681 Read completed with error (sct=0, sc=11) 00:13:26.681 02:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.197 02:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:27.197 02:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:27.455 true 00:13:27.455 02:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:27.455 02:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.020 02:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.278 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:13:28.278 02:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:28.278 02:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:28.536 true 00:13:28.796 02:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:28.796 02:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.054 02:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.054 02:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:29.054 02:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:29.313 true 00:13:29.313 02:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:29.313 02:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.570 02:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.828 02:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:29.828 02:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:30.086 true 00:13:30.086 02:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:30.086 02:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.463 02:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.463 02:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:31.463 02:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:31.721 true 00:13:31.721 02:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:31.721 02:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.657 02:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.657 02:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:32.657 02:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:32.915 true 00:13:32.915 02:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:32.915 02:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.172 02:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.432 02:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:33.432 02:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:33.690 true 00:13:33.690 02:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:33.690 02:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.625 02:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.625 02:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:34.625 02:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:34.883 true 00:13:34.883 02:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:34.883 02:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.140 02:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.399 02:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:35.399 02:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:35.657 true 00:13:35.657 02:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:35.657 02:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
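Each pass of the stress loop above repeats the same RPCs against the running target while spdk_nvme_perf keeps reading: detach namespace 1, re-attach Delay0 (the delay bdev layered on Malloc0 earlier in the trace), bump null_size, and grow NULL1, the null bdev backing namespace 2. A condensed sketch of one iteration as it appears in the @44-@50 entries, not a verbatim copy of ns_hotplug_stress.sh; ./spdk stands in for the full workspace path and PERF_PID is the spdk_nvme_perf process started at @42:

rpc_py=./spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2> /dev/null; do                            # keep going while perf is still running
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # detach namespace 1 under active I/O
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # re-attach the delay bdev as namespace 1
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"                       # grow the null bdev backing namespace 2
done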
00:13:36.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.593 02:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.851 02:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:36.851 02:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:37.120 true 00:13:37.120 02:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:37.120 02:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.431 02:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.431 02:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:37.431 02:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:37.702 true 00:13:37.702 02:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:37.702 02:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.653 02:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.911 02:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:38.911 02:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:39.169 true 00:13:39.169 02:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:39.169 02:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.427 02:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.684 02:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:39.684 02:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:39.942 true 00:13:39.943 02:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:39.943 02:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:13:40.880 02:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.880 02:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:40.880 02:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:41.138 true 00:13:41.138 02:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:41.138 02:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.397 02:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.655 02:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:41.655 02:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:41.913 true 00:13:41.913 02:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:41.913 02:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.191 02:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.449 02:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:42.449 02:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:42.707 true 00:13:42.707 02:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:42.707 02:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.642 02:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.900 02:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:43.900 02:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:44.158 true 00:13:44.158 02:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:44.158 02:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.416 02:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.674 02:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:44.674 02:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:44.932 true 00:13:44.932 02:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:44.932 02:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.191 02:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.449 02:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:45.449 02:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:45.709 true 00:13:45.709 02:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:45.709 02:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.648 02:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.907 02:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:46.907 02:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:47.165 true 00:13:47.165 02:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:47.165 02:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.424 02:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.993 02:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:47.993 02:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:47.993 true 00:13:48.250 02:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:48.250 02:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.250 02:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.507 02:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
00:13:48.507 02:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:48.765 true 00:13:48.765 02:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:48.765 02:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.995 02:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.995 02:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:49.995 02:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:50.253 true 00:13:50.253 02:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:50.253 02:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.511 02:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.769 02:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:50.769 02:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:51.027 true 00:13:51.027 02:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:51.027 02:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.964 02:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.223 02:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:52.223 02:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:52.482 true 00:13:52.482 02:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:52.482 02:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.741 02:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.999 02:00:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:52.999 02:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:53.256 true 00:13:53.256 02:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:53.256 02:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.514 02:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.772 02:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:53.772 02:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:54.030 true 00:13:54.030 02:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:54.030 02:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.411 02:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.411 02:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:55.411 02:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:55.668 true 00:13:55.668 02:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528 00:13:55.668 02:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.611 Initializing NVMe Controllers 00:13:56.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.611 Controller IO queue size 128, less than required. 00:13:56.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.611 Controller IO queue size 128, less than required. 00:13:56.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
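The controller-queue-size notices above and the summary that follows are printed by the spdk_nvme_perf initiator started back at @40/@42 of the trace. For reference, a restatement of that invocation with the flags spelled out; ./spdk stands in for the full Jenkins workspace path, and the -Q comment is an inference from the "Message suppressed 999 times" lines rather than from perf's own documentation:

perf=./spdk/build/bin/spdk_nvme_perf
$perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000
# -c 0x1  : pin the initiator to core 0
# -r ...  : connect over TCP to the target at 10.0.0.2:4420
# -t 30   : run for 30 seconds
# -q 128  : queue depth 128 per namespace
# -w/-o   : 512-byte random reads
# -Q 1000 : tolerate I/O errors and only log every 1000th one, which is why reads that
#           fail while a namespace is detached appear as "Message suppressed 999 times"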
00:13:56.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:56.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:56.611 Initialization complete. Launching workers.
00:13:56.611 ========================================================
00:13:56.611                                                                              Latency(us)
00:13:56.611 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:13:56.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1007.57       0.49   66665.76    2343.54 1042773.32
00:13:56.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10780.23       5.26   11873.24    2673.38  456552.11
00:13:56.611 ========================================================
00:13:56.611 Total                                                                  :   11787.80       5.76   16556.65    2343.54 1042773.32
00:13:56.611
00:13:56.611 02:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:56.611 02:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:13:56.869 02:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:13:56.869 true
00:13:56.869 02:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1528528
00:13:56.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1528528) - No such process
00:13:56.869 02:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1528528
00:13:56.869 02:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:57.125 02:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:57.382 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:57.382 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:57.382 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:57.382 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:57.382 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:57.639 null0
00:13:57.639 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:57.639 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:57.639 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:13:57.897 null1
00:13:57.897 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:57.897 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:57.897 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdev_null_create null2 100 4096 00:13:58.154 null2 00:13:58.154 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:58.154 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:58.154 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:58.412 null3 00:13:58.412 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:58.412 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:58.412 02:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:58.670 null4 00:13:58.670 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:58.670 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:58.670 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:58.928 null5 00:13:58.928 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:58.928 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:58.928 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:59.186 null6 00:13:59.186 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:59.186 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:59.186 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:59.445 null7 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
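From here the test shifts to eight concurrent workers, one per null bdev, each toggling its own namespace ID on and off ten times while the others do the same (the @58-@66 and @14-@18 entries around this point). A condensed sketch of that pattern, not a verbatim copy of ns_hotplug_stress.sh, with ./spdk again standing in for the workspace path:

rpc_py=./spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {                        # repeatedly attach/detach a single namespace
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        $rpc_py nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

pids=()
for ((i = 0; i < 8; i++)); do
    $rpc_py bdev_null_create "null$i" 100 4096   # 100 MB null bdev with a 4096-byte block size
    add_remove "$((i + 1))" "null$i" &           # namespace IDs 1-8, one background worker per bdev
    pids+=($!)
done
wait "${pids[@]}"                                # corresponds to the @66 wait entry in the trace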
00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1532566 1532567 1532569 1532571 1532573 1532575 1532577 1532579 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.445 02:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.703 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.703 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.703 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.703 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.703 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.703 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.703 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.703 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.961 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:14:00.218 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.218 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.218 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.219 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.219 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.219 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.219 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.219 02:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.477 
02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.477 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:00.735 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.735 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.735 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.735 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.735 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.735 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.735 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.735 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.993 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:01.249 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.250 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:14:01.250 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:01.250 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.250 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.250 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:01.250 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:01.250 02:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:01.506 
02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.506 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:01.763 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:01.763 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.763 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:01.763 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.763 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.764 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:01.764 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:01.764 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.050 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.308 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.308 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:02.308 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.308 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:14:02.308 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:02.308 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.308 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.308 02:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.565 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.822 
02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.822 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.080 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.080 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.080 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.080 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.080 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.080 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.338 02:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:03.596 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.596 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:03.596 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.596 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.596 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.596 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:14:03.596 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.596 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
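The interleaved add/remove calls traced above come from eight concurrent add_remove workers, one per null bdev, each looping ten times over nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns against nqn.2016-06.io.spdk:cnode1 and reaped by a single wait. A minimal bash sketch of that pattern, reconstructed from the traced commands rather than copied verbatim from target/ns_hotplug_stress.sh, is:

# Sketch of the add/remove pattern traced above; reconstructed from the
# xtrace lines, not the verbatim target/ns_hotplug_stress.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

# One background worker per namespace/bdev pair (nsid 1..8 -> null0..null7);
# the single wait at the end is the "wait 1532566 1532567 ..." seen above.
pids=()
for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &
    pids+=($!)
done
wait "${pids[@]}"

Because the workers run in the background until that final wait, their RPCs interleave freely, which is why the namespace IDs in the trace appear in no fixed order.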
00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.855 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.114 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.114 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:04.114 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:04.114 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.114 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.114 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.114 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.114 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.372 02:01:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.372 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.373 02:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.630 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.630 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:04.630 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:04.630 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.630 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.630 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.630 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.630 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.888 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.888 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.888 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.888 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.888 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.888 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.888 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.888 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.889 rmmod nvme_tcp 00:14:04.889 rmmod nvme_fabrics 00:14:04.889 rmmod nvme_keyring 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1528104 ']' 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1528104 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1528104 ']' 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1528104 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.889 02:01:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1528104
00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1528104'
killing process with pid 1528104
00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1528104
00:14:04.889 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1528104
00:14:05.148 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:05.148 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:05.148 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:05.148 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:05.148 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:05.148 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:05.148 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:05.148 02:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:07.713 02:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:07.713
00:14:07.713 real 0m46.752s
00:14:07.713 user 3m32.446s
00:14:07.713 sys 0m16.396s
00:14:07.713 02:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:07.713 02:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:14:07.713 ************************************
00:14:07.713 END TEST nvmf_ns_hotplug_stress
00:14:07.713 ************************************
00:14:07.713 02:01:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:14:07.713 02:01:12 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:07.713 02:01:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:14:07.713 02:01:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:07.713 02:01:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:07.713 ************************************
00:14:07.713 START TEST nvmf_connect_stress
00:14:07.713 ************************************
00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:07.713 * Looking for test storage...
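The hotplug test ends with a fixed teardown before the runner moves on to nvmf_connect_stress: nvmftestfini unloads the NVMe/TCP kernel modules, kills the nvmf target (pid 1528104 in this run), removes the cvl_0_0_ns_spdk namespace, and flushes the initiator interface ahead of the END TEST banner. A simplified sketch of those helpers, with names taken from the traced nvmf/common.sh and autotest_common.sh lines and the namespace cleanup written out as an assumption, is:

# Simplified sketch of the teardown traced above; the real nvmf/common.sh
# and autotest_common.sh helpers carry more retries and error handling.
nvmfcleanup() {
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # drops nvme_tcp, nvme_fabrics, nvme_keyring
    done
    modprobe -v -r nvme-fabrics
    set -e
}

killprocess() {
    local pid=$1                           # 1528104, the nvmf target, in this run
    kill -0 "$pid"                         # confirm the target is still alive
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

nvmftestfini() {
    nvmfcleanup
    killprocess "$nvmfpid"
    # nvmf_tcp_fini: drop the target namespace and flush the initiator port
    # (assumed equivalent of the traced remove_spdk_ns / ip addr flush steps).
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
}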
00:14:07.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.713 02:01:12 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.714 02:01:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:09.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.630 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:09.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:09.631 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.631 02:01:15 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:09.631 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:09.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:14:09.631 00:14:09.631 --- 10.0.0.2 ping statistics --- 00:14:09.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.631 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:14:09.631 00:14:09.631 --- 10.0.0.1 ping statistics --- 00:14:09.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.631 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1535333 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1535333 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1535333 ']' 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:09.631 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.631 [2024-07-14 02:01:15.246324] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:14:09.631 [2024-07-14 02:01:15.246409] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.631 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.631 [2024-07-14 02:01:15.311475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.891 [2024-07-14 02:01:15.396368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.891 [2024-07-14 02:01:15.396434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.891 [2024-07-14 02:01:15.396455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.891 [2024-07-14 02:01:15.396472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.891 [2024-07-14 02:01:15.396487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.892 [2024-07-14 02:01:15.396576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.892 [2024-07-14 02:01:15.396641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.892 [2024-07-14 02:01:15.396647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.892 [2024-07-14 02:01:15.542530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.892 [2024-07-14 02:01:15.575024] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.892 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.152 NULL1 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1535362 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.152 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.412 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.412 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:10.412 02:01:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.412 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.412 02:01:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.672 02:01:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.672 02:01:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:10.672 02:01:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.672 02:01:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.672 02:01:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.931 02:01:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.931 02:01:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 
00:14:10.931 02:01:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.931 02:01:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.931 02:01:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.499 02:01:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.499 02:01:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:11.499 02:01:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.499 02:01:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.499 02:01:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.759 02:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.759 02:01:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:11.759 02:01:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.759 02:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.759 02:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.019 02:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.019 02:01:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:12.019 02:01:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.019 02:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.019 02:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.277 02:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.277 02:01:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:12.277 02:01:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.277 02:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.277 02:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.535 02:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.535 02:01:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:12.535 02:01:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.535 02:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.535 02:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.103 02:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.103 02:01:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:13.103 02:01:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.103 02:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.103 02:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.362 02:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.362 02:01:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:13.362 02:01:18 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.362 02:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.362 02:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.620 02:01:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.620 02:01:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:13.620 02:01:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.620 02:01:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.620 02:01:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.878 02:01:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.878 02:01:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:13.878 02:01:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.878 02:01:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.878 02:01:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.138 02:01:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.138 02:01:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:14.138 02:01:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.138 02:01:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.138 02:01:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.706 02:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.706 02:01:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:14.706 02:01:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.706 02:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.706 02:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.964 02:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.964 02:01:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:14.964 02:01:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.964 02:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.964 02:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.222 02:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.222 02:01:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:15.222 02:01:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.222 02:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.222 02:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.481 02:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.481 02:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:15.481 02:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.481 
02:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.481 02:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.739 02:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.739 02:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:15.739 02:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.739 02:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.739 02:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.323 02:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.323 02:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:16.323 02:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.323 02:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.323 02:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.582 02:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.582 02:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:16.582 02:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.582 02:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.582 02:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.840 02:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.840 02:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:16.840 02:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.840 02:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.840 02:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.100 02:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.100 02:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:17.100 02:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.100 02:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.100 02:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.359 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.359 02:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:17.359 02:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.359 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.359 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.926 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.926 02:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:17.926 02:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.926 02:01:23 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.926 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.186 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.186 02:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:18.186 02:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.186 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.186 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.445 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.445 02:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:18.445 02:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.445 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.445 02:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.705 02:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.705 02:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:18.705 02:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.705 02:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.705 02:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.963 02:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.964 02:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:18.964 02:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.964 02:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.964 02:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.531 02:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.531 02:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:19.531 02:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.531 02:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.531 02:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.791 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.791 02:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:19.791 02:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.791 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.791 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.050 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.050 02:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:20.050 02:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.050 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:20.050 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.308 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1535362 00:14:20.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1535362) - No such process 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1535362 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.308 rmmod nvme_tcp 00:14:20.308 rmmod nvme_fabrics 00:14:20.308 rmmod nvme_keyring 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1535333 ']' 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1535333 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1535333 ']' 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1535333 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.308 02:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1535333 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1535333' 00:14:20.566 killing process with pid 1535333 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1535333 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1535333 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.566 02:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.119 02:01:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.119 00:14:23.119 real 0m15.382s 00:14:23.119 user 0m38.032s 00:14:23.119 sys 0m6.187s 00:14:23.119 02:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.119 02:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.119 ************************************ 00:14:23.119 END TEST nvmf_connect_stress 00:14:23.119 ************************************ 00:14:23.119 02:01:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:23.119 02:01:28 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:23.119 02:01:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:23.119 02:01:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.119 02:01:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.119 ************************************ 00:14:23.119 START TEST nvmf_fused_ordering 00:14:23.119 ************************************ 00:14:23.119 02:01:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:23.119 * Looking for test storage... 
00:14:23.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.120 02:01:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:25.026 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:25.026 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:25.026 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.026 02:01:30 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:25.026 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.026 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:25.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:14:25.027 00:14:25.027 --- 10.0.0.2 ping statistics --- 00:14:25.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.027 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:14:25.027 00:14:25.027 --- 10.0.0.1 ping statistics --- 00:14:25.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.027 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1538525 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1538525 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1538525 ']' 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.027 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.027 [2024-07-14 02:01:30.458215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
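[Editor's note: for readers following the trace, the nvmf_tcp_init steps above reduce to the short sketch below. It is a condensed recap of the commands already shown in this log, not additional output; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply the ones this harness assigned to the two E810 ports. The nvmf_tgt launched next runs inside the namespace and listens on the target-side address.]
    # Target-side port moves into its own network namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                                  # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check
[Splitting the two ports across namespaces forces the NVMe/TCP traffic over the physical link between them rather than the kernel's local loopback path, so the transport is exercised end to end.]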
00:14:25.027 [2024-07-14 02:01:30.458317] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.027 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.027 [2024-07-14 02:01:30.527995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.027 [2024-07-14 02:01:30.617597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.027 [2024-07-14 02:01:30.617663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.027 [2024-07-14 02:01:30.617688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.027 [2024-07-14 02:01:30.617709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.027 [2024-07-14 02:01:30.617729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.027 [2024-07-14 02:01:30.617780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.287 [2024-07-14 02:01:30.756326] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.287 [2024-07-14 02:01:30.772507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.287 02:01:30 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.287 NULL1 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.287 02:01:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:25.287 [2024-07-14 02:01:30.817770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:25.287 [2024-07-14 02:01:30.817814] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538645 ] 00:14:25.287 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.856 Attached to nqn.2016-06.io.spdk:cnode1 00:14:25.856 Namespace ID: 1 size: 1GB 00:14:25.856 fused_ordering(0) 00:14:25.856 fused_ordering(1) 00:14:25.856 fused_ordering(2) 00:14:25.856 fused_ordering(3) 00:14:25.856 fused_ordering(4) 00:14:25.856 fused_ordering(5) 00:14:25.856 fused_ordering(6) 00:14:25.856 fused_ordering(7) 00:14:25.856 fused_ordering(8) 00:14:25.857 fused_ordering(9) 00:14:25.857 fused_ordering(10) 00:14:25.857 fused_ordering(11) 00:14:25.857 fused_ordering(12) 00:14:25.857 fused_ordering(13) 00:14:25.857 fused_ordering(14) 00:14:25.857 fused_ordering(15) 00:14:25.857 fused_ordering(16) 00:14:25.857 fused_ordering(17) 00:14:25.857 fused_ordering(18) 00:14:25.857 fused_ordering(19) 00:14:25.857 fused_ordering(20) 00:14:25.857 fused_ordering(21) 00:14:25.857 fused_ordering(22) 00:14:25.857 fused_ordering(23) 00:14:25.857 fused_ordering(24) 00:14:25.857 fused_ordering(25) 00:14:25.857 fused_ordering(26) 00:14:25.857 fused_ordering(27) 00:14:25.857 fused_ordering(28) 00:14:25.857 fused_ordering(29) 00:14:25.857 fused_ordering(30) 00:14:25.857 fused_ordering(31) 00:14:25.857 fused_ordering(32) 00:14:25.857 fused_ordering(33) 00:14:25.857 fused_ordering(34) 00:14:25.857 fused_ordering(35) 00:14:25.857 fused_ordering(36) 00:14:25.857 fused_ordering(37) 00:14:25.857 fused_ordering(38) 00:14:25.857 fused_ordering(39) 00:14:25.857 fused_ordering(40) 00:14:25.857 fused_ordering(41) 00:14:25.857 fused_ordering(42) 00:14:25.857 fused_ordering(43) 00:14:25.857 
fused_ordering(44) through fused_ordering(1012) (sequential markers, 00:14:25.857 - 00:14:28.871)
00:14:28.871 fused_ordering(1013) 00:14:28.871 fused_ordering(1014) 00:14:28.871 fused_ordering(1015) 00:14:28.871 fused_ordering(1016) 00:14:28.871 fused_ordering(1017) 00:14:28.871 fused_ordering(1018) 00:14:28.871 fused_ordering(1019) 00:14:28.871 fused_ordering(1020) 00:14:28.871 fused_ordering(1021) 00:14:28.871 fused_ordering(1022) 00:14:28.871 fused_ordering(1023) 00:14:28.871 02:01:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:28.871 02:01:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:28.871 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:28.871 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:28.871 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.871 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:28.871 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.871 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.871 rmmod nvme_tcp 00:14:28.872 rmmod nvme_fabrics 00:14:28.872 rmmod nvme_keyring 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1538525 ']' 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1538525 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1538525 ']' 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1538525 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1538525 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1538525' 00:14:28.872 killing process with pid 1538525 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1538525 00:14:28.872 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1538525 00:14:29.131 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:29.131 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:29.131 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:29.131 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.131 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.131 02:01:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.131 02:01:34 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.131 02:01:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.671 02:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.671 00:14:31.671 real 0m8.414s 00:14:31.671 user 0m5.920s 00:14:31.671 sys 0m4.269s 00:14:31.671 02:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.671 02:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.671 ************************************ 00:14:31.671 END TEST nvmf_fused_ordering 00:14:31.671 ************************************ 00:14:31.671 02:01:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.671 02:01:36 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:31.671 02:01:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.671 02:01:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.671 02:01:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.671 ************************************ 00:14:31.671 START TEST nvmf_delete_subsystem 00:14:31.671 ************************************ 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:31.671 * Looking for test storage... 00:14:31.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.671 02:01:36 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.671 02:01:36 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.671 02:01:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:33.575 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.576 02:01:38 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:33.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:33.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:33.576 02:01:38 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:33.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:33.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:33.576 02:01:38 
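[editor's note] The device-discovery step traced above (gather_supported_nvmf_pci_devs) builds per-vendor lists of supported device IDs (Intel E810/X722, Mellanox ConnectX) and resolves each PCI address to its kernel net device through sysfs. A minimal stand-alone sketch of the same lookup, assuming lspci is available — the harness itself walks a pre-built pci_bus_cache rather than calling lspci:

    intel=0x8086
    # 0x159b is the E810 device ID reported in this run; other supported IDs work the same way
    for pci in $(lspci -D -d "${intel}:0x159b" | awk '{print $1}'); do
        for net_dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: $(basename "$net_dev")"
        done
    done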
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:33.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:14:33.576 00:14:33.576 --- 10.0.0.2 ping statistics --- 00:14:33.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.576 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:14:33.576 00:14:33.576 --- 10.0.0.1 ping statistics --- 00:14:33.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.576 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1540972 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1540972 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1540972 ']' 00:14:33.576 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.577 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.577 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.577 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.577 02:01:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.577 [2024-07-14 02:01:38.971089] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:33.577 [2024-07-14 02:01:38.971189] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.577 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.577 [2024-07-14 02:01:39.041804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.577 [2024-07-14 02:01:39.133547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
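[editor's note] The nvmf_tcp_init sequence traced above condenses to the following network plumbing; this is a summary sketch assembled from this run's log (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values reported here), not the nvmf/common.sh source:

    ip netns add cvl_0_0_ns_spdk                  # target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to port 4420
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator sanity check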
00:14:33.577 [2024-07-14 02:01:39.133608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.577 [2024-07-14 02:01:39.133624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.577 [2024-07-14 02:01:39.133637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.577 [2024-07-14 02:01:39.133649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.577 [2024-07-14 02:01:39.133718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.577 [2024-07-14 02:01:39.133723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.577 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.577 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:33.577 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.577 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.577 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.836 [2024-07-14 02:01:39.274252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.836 [2024-07-14 02:01:39.290466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.836 NULL1 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.836 Delay0 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1540996 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:33.836 02:01:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:33.836 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.836 [2024-07-14 02:01:39.365186] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
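[editor's note] The subsystem setup just traced boils down to six RPCs plus a background perf run. Spelled out against scripts/rpc.py (rpc_cmd in the trace is a wrapper over the same /var/tmp/spdk.sock socket), with the arguments used in this run; the trailing delete is what the rest of this test exercises:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s avg/p99 latencies (microseconds)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # drive I/O at the delayed namespace, then delete the subsystem while I/O is in flight
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # queued I/O completes with errors (sct=0, sc=8) below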
00:14:35.742 02:01:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.742 02:01:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.742 02:01:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 [2024-07-14 02:01:41.457204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55ce0 is same with the state(5) to be set 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed 
with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Write completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 starting I/O failed: -6 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.002 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 starting I/O failed: -6 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 
00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 starting I/O failed: -6 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 starting I/O failed: -6 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 starting I/O failed: -6 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 starting I/O failed: -6 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 starting I/O failed: -6 00:14:36.003 [2024-07-14 02:01:41.458010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3b60000c00 is same with the state(5) to be set 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 
00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Write completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.003 Read completed with error (sct=0, sc=8) 00:14:36.940 [2024-07-14 02:01:42.426543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b72630 is same with the state(5) to be set 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 [2024-07-14 02:01:42.455958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55b00 is same with the state(5) to be set 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 
Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 [2024-07-14 02:01:42.457105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55ec0 is same with the state(5) to be set 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Write completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.940 Read completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 [2024-07-14 02:01:42.459029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3b6000d600 is same with the state(5) to be set 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 Write completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 Read completed with error (sct=0, sc=8) 00:14:36.941 [2024-07-14 02:01:42.459824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3b6000cfe0 is same with the state(5) to be set 00:14:36.941 Initializing NVMe Controllers 00:14:36.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:36.941 Controller IO 
queue size 128, less than required. 00:14:36.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:36.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:36.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:36.941 Initialization complete. Launching workers. 00:14:36.941 ======================================================== 00:14:36.941 Latency(us) 00:14:36.941 Device Information : IOPS MiB/s Average min max 00:14:36.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.77 0.08 911199.38 675.07 1012875.40 00:14:36.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.23 0.08 897409.20 436.08 1013235.64 00:14:36.941 ======================================================== 00:14:36.941 Total : 332.00 0.16 904170.31 436.08 1013235.64 00:14:36.941 00:14:36.941 [2024-07-14 02:01:42.460404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b72630 (9): Bad file descriptor 00:14:36.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:36.941 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.941 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:36.941 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1540996 00:14:36.941 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1540996 00:14:37.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1540996) - No such process 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1540996 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1540996 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1540996 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.510 [2024-07-14 02:01:42.984676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.510 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1541518 00:14:37.511 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:37.511 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:37.511 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1541518 00:14:37.511 02:01:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:37.511 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.511 [2024-07-14 02:01:43.048536] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
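[editor's note] The repeated delete_subsystem.sh@57/@58/@60 lines that follow are a poll loop on the second perf process (pid 1541518 here): check it twice per second until it exits, with a bounded number of attempts. A sketch of that shape — an assumption about the exact control flow in delete_subsystem.sh, inferred from the traced line numbers:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
        (( delay++ > 20 )) && break             # bound the wait to ~10 s of 0.5 s sleeps
        sleep 0.5
    done
    # once kill -0 reports "No such process", the harness wait()s on the pid and moves on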
00:14:38.077 02:01:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.077 02:01:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1541518 00:14:38.077 02:01:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.336 02:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.336 02:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1541518 00:14:38.336 02:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.903 02:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.903 02:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1541518 00:14:38.903 02:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:39.469 02:01:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:39.469 02:01:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1541518 00:14:39.469 02:01:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:40.038 02:01:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.038 02:01:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1541518 00:14:40.038 02:01:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:40.634 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.634 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1541518 00:14:40.634 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:40.634 Initializing NVMe Controllers 00:14:40.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.634 Controller IO queue size 128, less than required. 00:14:40.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:40.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:40.634 Initialization complete. Launching workers. 
00:14:40.634 ======================================================== 00:14:40.634 Latency(us) 00:14:40.634 Device Information : IOPS MiB/s Average min max 00:14:40.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003410.96 1000241.03 1012202.02 00:14:40.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004472.64 1000277.21 1042838.77 00:14:40.634 ======================================================== 00:14:40.634 Total : 256.00 0.12 1003941.80 1000241.03 1042838.77 00:14:40.634 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1541518 00:14:40.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1541518) - No such process 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1541518 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.893 rmmod nvme_tcp 00:14:40.893 rmmod nvme_fabrics 00:14:40.893 rmmod nvme_keyring 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1540972 ']' 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1540972 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1540972 ']' 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1540972 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.893 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1540972 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1540972' 00:14:41.151 killing process with pid 1540972 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1540972 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1540972 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.151 02:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.684 02:01:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:43.684 00:14:43.684 real 0m12.075s 00:14:43.684 user 0m27.630s 00:14:43.684 sys 0m2.895s 00:14:43.684 02:01:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.684 02:01:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.684 ************************************ 00:14:43.684 END TEST nvmf_delete_subsystem 00:14:43.684 ************************************ 00:14:43.684 02:01:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:43.685 02:01:48 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:43.685 02:01:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:43.685 02:01:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.685 02:01:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:43.685 ************************************ 00:14:43.685 START TEST nvmf_ns_masking 00:14:43.685 ************************************ 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:43.685 * Looking for test storage... 
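[editor's note] For reference, the nvmftestfini teardown traced at the end of the delete_subsystem test above reduces to unloading the host-side NVMe modules, stopping the target, and flushing the test addresses before the next test reinitializes everything. A condensed sketch; the namespace removal inside _remove_spdk_ns is an assumption for this run:

    modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess: stop the nvmf_tgt reactor (pid 1540972 here)
    ip netns delete cvl_0_0_ns_spdk       # assumption: what _remove_spdk_ns does for this namespace
    ip -4 addr flush cvl_0_1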
00:14:43.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4a75b7a4-796c-4f0d-aef7-05f92f1078d5 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2ef21a7b-f797-4a40-af3d-fc4726aa2d2e 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=34234f65-8891-44f6-a90b-67975027abde 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.685 02:01:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.685 02:01:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.685 02:01:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.685 02:01:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.685 02:01:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.591 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.591 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:45.591 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:45.591 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:45.591 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:45.591 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:45.591 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:45.592 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:45.592 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.592 
02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:45.592 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:45.592 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.592 02:01:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.592 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.592 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:45.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:14:45.593 00:14:45.593 --- 10.0.0.2 ping statistics --- 00:14:45.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.593 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:14:45.593 00:14:45.593 --- 10.0.0.1 ping statistics --- 00:14:45.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.593 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1543866 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1543866 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1543866 ']' 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.593 02:01:51 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.593 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.593 [2024-07-14 02:01:51.093960] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:45.593 [2024-07-14 02:01:51.094038] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.593 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.593 [2024-07-14 02:01:51.180157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.852 [2024-07-14 02:01:51.290047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.852 [2024-07-14 02:01:51.290114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.852 [2024-07-14 02:01:51.290139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.852 [2024-07-14 02:01:51.290161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.852 [2024-07-14 02:01:51.290181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.852 [2024-07-14 02:01:51.290223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.852 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.852 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:45.852 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.852 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.852 02:01:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.852 02:01:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.852 02:01:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:46.110 [2024-07-14 02:01:51.722607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.110 02:01:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:46.110 02:01:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:46.110 02:01:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:46.371 Malloc1 00:14:46.632 02:01:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:46.890 Malloc2 00:14:46.890 02:01:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
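At this point in the trace the test has moved one E810 port (cvl_0_0) into the private network namespace cvl_0_0_ns_spdk with address 10.0.0.2, started nvmf_tgt inside that namespace, and begun configuring it over /var/tmp/spdk.sock. A condensed sketch of the target-side bring-up performed here, with the long workspace paths shortened to rpc.py (every RPC name appears verbatim in the trace):

# create the TCP transport and two 64 MB malloc bdevs (512-byte blocks)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py bdev_malloc_create 64 512 -b Malloc2
# subsystem cnode1: -a allows any host, -s sets the serial number
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
# the namespace and the 10.0.0.2:4420 listener are added in the next few trace lines
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420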
00:14:47.149 02:01:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:47.149 02:01:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.408 [2024-07-14 02:01:53.062960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.408 02:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:47.408 02:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 34234f65-8891-44f6-a90b-67975027abde -a 10.0.0.2 -s 4420 -i 4 00:14:47.668 02:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:47.668 02:01:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:47.668 02:01:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.668 02:01:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:47.668 02:01:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:49.572 02:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:49.572 02:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:49.572 02:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.572 02:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:49.572 02:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.572 02:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:49.572 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:49.572 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:49.831 [ 0]:0x1 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36e5a53b1ddf4a53a3047dd05b80cc55 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36e5a53b1ddf4a53a3047dd05b80cc55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.831 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
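The connect-and-verify pattern just shown repeats throughout the test: the initiator connects with an explicit host NQN and host ID, then ns_is_visible() decides whether a namespace is exposed by comparing its NGUID against the all-zero value reported for a hidden namespace ID. A stand-alone sketch of that check, assuming the controller comes up as /dev/nvme0 as it does in the trace:

# connect with a specific host NQN / host ID (values copied from the trace)
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 34234f65-8891-44f6-a90b-67975027abde -i 4
# visibility check used by ns_is_visible(): a visible namespace reports a non-zero NGUID
nvme list-ns /dev/nvme0 | grep 0x1
nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
[[ $nguid != "00000000000000000000000000000000" ]] && echo "nsid 1 is visible (nguid=$nguid)"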
00:14:50.089 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:50.090 [ 0]:0x1 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36e5a53b1ddf4a53a3047dd05b80cc55 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36e5a53b1ddf4a53a3047dd05b80cc55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:50.090 [ 1]:0x2 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a2acc1c9d734d52bcb3ba9f13941671 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a2acc1c9d734d52bcb3ba9f13941671 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.090 02:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.348 02:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:50.608 02:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:50.608 02:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 34234f65-8891-44f6-a90b-67975027abde -a 10.0.0.2 -s 4420 -i 4 00:14:50.868 02:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:50.868 02:01:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:50.868 02:01:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.868 02:01:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:50.868 02:01:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:50.868 02:01:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:53.405 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:53.405 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:53.405 02:01:58 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.405 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:53.405 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.405 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:53.406 [ 0]:0x2 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a2acc1c9d734d52bcb3ba9f13941671 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9a2acc1c9d734d52bcb3ba9f13941671 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:53.406 [ 0]:0x1 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36e5a53b1ddf4a53a3047dd05b80cc55 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36e5a53b1ddf4a53a3047dd05b80cc55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:53.406 [ 1]:0x2 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a2acc1c9d734d52bcb3ba9f13941671 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a2acc1c9d734d52bcb3ba9f13941671 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.406 02:01:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:53.666 [ 0]:0x2 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a2acc1c9d734d52bcb3ba9f13941671 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a2acc1c9d734d52bcb3ba9f13941671 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:53.666 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.926 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:53.926 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:53.926 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 34234f65-8891-44f6-a90b-67975027abde -a 10.0.0.2 -s 4420 -i 4 00:14:54.185 02:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:54.185 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:54.185 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.185 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:54.185 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:54.185 02:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
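The heart of the masking test is the RPC pair exercised above: a namespace attached with --no-auto-visible stays hidden from a connected host until nvmf_ns_add_host grants its host NQN access, and nvmf_ns_remove_host hides it again, which is exactly what the alternating ns_is_visible / NOT ns_is_visible checks confirm. A minimal sketch of that sequence, using the same subsystem, bdev and host NQN as the trace:

# attach Malloc1 as nsid 1 but keep it hidden by default
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# grant, then revoke, visibility for host1 only
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1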
00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:56.726 [ 0]:0x1 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36e5a53b1ddf4a53a3047dd05b80cc55 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36e5a53b1ddf4a53a3047dd05b80cc55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:56.726 [ 1]:0x2 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a2acc1c9d734d52bcb3ba9f13941671 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a2acc1c9d734d52bcb3ba9f13941671 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.726 02:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:56.726 [ 0]:0x2 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a2acc1c9d734d52bcb3ba9f13941671 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a2acc1c9d734d52bcb3ba9f13941671 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.726 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:56.727 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:56.986 [2024-07-14 02:02:02.507751] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:56.986 request: 00:14:56.986 { 00:14:56.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.986 "nsid": 2, 00:14:56.986 "host": "nqn.2016-06.io.spdk:host1", 00:14:56.986 "method": "nvmf_ns_remove_host", 00:14:56.986 "req_id": 1 00:14:56.986 } 00:14:56.986 Got JSON-RPC error response 00:14:56.986 response: 00:14:56.986 { 00:14:56.986 "code": -32602, 00:14:56.986 "message": "Invalid parameters" 00:14:56.986 } 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:56.986 [ 0]:0x2 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a2acc1c9d734d52bcb3ba9f13941671 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9a2acc1c9d734d52bcb3ba9f13941671 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1545353 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1545353 /var/tmp/host.sock 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1545353 ']' 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:56.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.986 02:02:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:57.244 [2024-07-14 02:02:02.706363] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
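Two details are worth calling out here. First, nvmf_ns_remove_host was rejected with -32602 for nsid 2, presumably because that namespace was added without --no-auto-visible, and the NOT helper from autotest_common.sh turns the nonzero exit status into an expected-failure pass. Second, the test now starts a second SPDK application (spdk_tgt, pid 1545353) on its own RPC socket to act as the host side. A rough sketch of both patterns; expect_failure is an illustrative name rather than one of the script helpers, and the readiness probe stands in for the scripts' waitforlisten:

# expected-failure wrapper, loosely modelled on NOT() in autotest_common.sh
expect_failure() { if "$@"; then return 1; else return 0; fi; }
expect_failure rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1

# second SPDK instance on a separate UNIX-domain RPC socket (path shortened from the trace)
./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
rpc.py -s /var/tmp/host.sock rpc_get_methods > /dev/null   # crude readiness probe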
00:14:57.244 [2024-07-14 02:02:02.706447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545353 ] 00:14:57.244 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.244 [2024-07-14 02:02:02.770780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.244 [2024-07-14 02:02:02.862078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.502 02:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.502 02:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:57.502 02:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.767 02:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:58.025 02:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4a75b7a4-796c-4f0d-aef7-05f92f1078d5 00:14:58.025 02:02:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:58.025 02:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4A75B7A4796C4F0DAEF705F92F1078D5 -i 00:14:58.316 02:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2ef21a7b-f797-4a40-af3d-fc4726aa2d2e 00:14:58.316 02:02:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:58.316 02:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2EF21A7BF7974A40AF3DFC4726AA2D2E -i 00:14:58.574 02:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:58.832 02:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:59.090 02:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:59.090 02:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:59.660 nvme0n1 00:14:59.660 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:59.660 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:14:59.919 nvme1n2 00:14:59.920 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:59.920 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:59.920 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:59.920 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:59.920 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:00.178 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:00.178 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:00.178 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:00.178 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:00.436 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4a75b7a4-796c-4f0d-aef7-05f92f1078d5 == \4\a\7\5\b\7\a\4\-\7\9\6\c\-\4\f\0\d\-\a\e\f\7\-\0\5\f\9\2\f\1\0\7\8\d\5 ]] 00:15:00.436 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:00.436 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:00.436 02:02:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2ef21a7b-f797-4a40-af3d-fc4726aa2d2e == \2\e\f\2\1\a\7\b\-\f\7\9\7\-\4\a\4\0\-\a\f\3\d\-\f\c\4\7\2\6\a\a\2\d\2\e ]] 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1545353 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1545353 ']' 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1545353 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1545353 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1545353' 00:15:00.696 killing process with pid 1545353 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1545353 00:15:00.696 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1545353 00:15:00.955 02:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:01.521 02:02:06 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.521 rmmod nvme_tcp 00:15:01.521 rmmod nvme_fabrics 00:15:01.521 rmmod nvme_keyring 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1543866 ']' 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1543866 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1543866 ']' 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1543866 00:15:01.521 02:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:01.521 02:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.521 02:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1543866 00:15:01.521 02:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.521 02:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.521 02:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1543866' 00:15:01.521 killing process with pid 1543866 00:15:01.521 02:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1543866 00:15:01.521 02:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1543866 00:15:01.781 02:02:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:01.781 02:02:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:01.781 02:02:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:01.781 02:02:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.781 02:02:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.781 02:02:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.781 02:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.781 02:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.689 02:02:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:03.689 00:15:03.689 real 0m20.401s 00:15:03.689 user 0m26.511s 00:15:03.689 sys 0m4.059s 00:15:03.689 02:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.689 02:02:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:03.689 ************************************ 00:15:03.689 END TEST nvmf_ns_masking 00:15:03.689 ************************************ 00:15:03.689 02:02:09 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:03.689 02:02:09 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:03.689 02:02:09 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:03.689 02:02:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:03.689 02:02:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.689 02:02:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.689 ************************************ 00:15:03.689 START TEST nvmf_nvme_cli 00:15:03.689 ************************************ 00:15:03.689 02:02:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:03.948 * Looking for test storage... 00:15:03.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.948 02:02:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:03.949 02:02:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:05.853 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:05.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:05.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:05.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:05.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.854 02:02:11 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:05.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:15:05.854 00:15:05.854 --- 10.0.0.2 ping statistics --- 00:15:05.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.854 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:15:05.854 00:15:05.854 --- 10.0.0.1 ping statistics --- 00:15:05.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.854 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:05.854 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1547841 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1547841 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1547841 ']' 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.113 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.113 [2024-07-14 02:02:11.615699] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
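The nvmftestinit trace above reduces to a small piece of namespace plumbing: move the target-side port into its own network namespace, address both ends, open the NVMe/TCP port, and verify reachability. A minimal standalone sketch of those same steps, using the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run (no error handling):

# flush any stale addressing, then move the target-side interface into its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1 in the default namespace, target gets 10.0.0.2 inside it
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic on port 4420 and check connectivity in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1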
00:15:06.113 [2024-07-14 02:02:11.615789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.113 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.113 [2024-07-14 02:02:11.680515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.113 [2024-07-14 02:02:11.766895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.113 [2024-07-14 02:02:11.766944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.113 [2024-07-14 02:02:11.766960] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.113 [2024-07-14 02:02:11.766972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.113 [2024-07-14 02:02:11.766983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.113 [2024-07-14 02:02:11.767036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.113 [2024-07-14 02:02:11.767064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.113 [2024-07-14 02:02:11.767095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.113 [2024-07-14 02:02:11.767097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 [2024-07-14 02:02:11.922756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 Malloc0 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 Malloc1 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.371 02:02:11 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.371 02:02:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.372 02:02:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.372 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.372 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.372 [2024-07-14 02:02:12.008579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.372 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.372 02:02:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.372 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.372 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.372 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.372 02:02:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:06.630 00:15:06.630 Discovery Log Number of Records 2, Generation counter 2 00:15:06.630 =====Discovery Log Entry 0====== 00:15:06.630 trtype: tcp 00:15:06.630 adrfam: ipv4 00:15:06.630 subtype: current discovery subsystem 00:15:06.630 treq: not required 00:15:06.630 portid: 0 00:15:06.630 trsvcid: 4420 00:15:06.630 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:06.630 traddr: 10.0.0.2 00:15:06.630 eflags: explicit discovery connections, duplicate discovery information 00:15:06.630 sectype: none 00:15:06.630 =====Discovery Log Entry 1====== 00:15:06.630 trtype: tcp 00:15:06.630 adrfam: ipv4 00:15:06.630 subtype: nvme subsystem 00:15:06.630 treq: not required 00:15:06.630 portid: 0 00:15:06.630 trsvcid: 4420 00:15:06.630 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:06.630 traddr: 10.0.0.2 00:15:06.630 eflags: none 00:15:06.630 sectype: none 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:06.630 02:02:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.198 02:02:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:07.198 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:07.198 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.198 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:07.198 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:07.198 02:02:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:09.099 02:02:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:09.099 /dev/nvme0n1 ]] 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:09.099 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:09.359 02:02:14 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:09.360 rmmod nvme_tcp 00:15:09.360 rmmod nvme_fabrics 00:15:09.360 rmmod nvme_keyring 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1547841 ']' 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1547841 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1547841 ']' 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1547841 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1547841 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1547841' 00:15:09.360 killing process with pid 1547841 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1547841 00:15:09.360 02:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1547841 00:15:09.621 02:02:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:09.621 02:02:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:09.621 02:02:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:09.621 02:02:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:09.621 02:02:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:09.621 02:02:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.621 02:02:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.621 02:02:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.160 02:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:12.160 00:15:12.160 real 0m7.898s 00:15:12.160 user 0m14.247s 00:15:12.160 sys 0m2.131s 00:15:12.160 02:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:12.160 02:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.160 ************************************ 00:15:12.160 END TEST nvmf_nvme_cli 00:15:12.160 ************************************ 00:15:12.160 02:02:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:12.160 02:02:17 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:12.160 02:02:17 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:12.160 02:02:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:12.160 02:02:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.160 02:02:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:12.160 ************************************ 00:15:12.160 START TEST nvmf_vfio_user 00:15:12.160 ************************************ 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:12.160 * Looking for test storage... 00:15:12.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.160 02:02:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:12.161 
02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1548646 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1548646' 00:15:12.161 Process pid: 1548646 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1548646 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1548646 ']' 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:12.161 [2024-07-14 02:02:17.440530] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:12.161 [2024-07-14 02:02:17.440624] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.161 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.161 [2024-07-14 02:02:17.505738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.161 [2024-07-14 02:02:17.594207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.161 [2024-07-14 02:02:17.594257] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.161 [2024-07-14 02:02:17.594271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.161 [2024-07-14 02:02:17.594282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.161 [2024-07-14 02:02:17.594298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
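Everything the harness does before configuring the vfio-user target is visible in the lines above: wipe /var/run/vfio-user, launch nvmf_tgt on cores 0-3 with full tracing, and wait for its RPC socket. A rough standalone sketch of that launch; the socket-existence poll below is only an approximation standing in for the harness's waitforlisten helper, not the helper itself:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rm -rf /var/run/vfio-user
# -i 0: shared memory id, -e 0xFFFF: enable all tracepoint groups, -m '[0,1,2,3]': run on cores 0-3
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
# crude wait for the default RPC socket before issuing any rpc.py calls
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done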
00:15:12.161 [2024-07-14 02:02:17.594349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.161 [2024-07-14 02:02:17.594378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.161 [2024-07-14 02:02:17.594438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.161 [2024-07-14 02:02:17.594441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:12.161 02:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:13.097 02:02:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:13.355 02:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:13.355 02:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:13.355 02:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:13.355 02:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:13.355 02:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:13.614 Malloc1 00:15:13.614 02:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:13.874 02:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:14.132 02:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:14.391 02:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.391 02:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:14.391 02:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:14.659 Malloc2 00:15:14.659 02:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:14.971 02:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:15.230 02:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:15.491 02:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:15.491 02:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:15.491 02:02:21 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.491 02:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:15.491 02:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:15.491 02:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:15.491 [2024-07-14 02:02:21.067410] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:15.491 [2024-07-14 02:02:21.067461] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549074 ] 00:15:15.491 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.491 [2024-07-14 02:02:21.103186] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:15.491 [2024-07-14 02:02:21.105751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:15.491 [2024-07-14 02:02:21.105785] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f754beb6000 00:15:15.491 [2024-07-14 02:02:21.106746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.491 [2024-07-14 02:02:21.107746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.491 [2024-07-14 02:02:21.108749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.491 [2024-07-14 02:02:21.109751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.491 [2024-07-14 02:02:21.110757] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.491 [2024-07-14 02:02:21.111765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.492 [2024-07-14 02:02:21.112770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.492 [2024-07-14 02:02:21.113776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.492 [2024-07-14 02:02:21.114781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:15.492 [2024-07-14 02:02:21.114802] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f754ac6a000 00:15:15.492 [2024-07-14 02:02:21.115939] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:15.492 [2024-07-14 02:02:21.131537] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:15.492 [2024-07-14 02:02:21.131576] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:15.492 [2024-07-14 02:02:21.136930] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:15.492 [2024-07-14 02:02:21.136989] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:15.492 [2024-07-14 02:02:21.137090] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:15.492 [2024-07-14 02:02:21.137127] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:15.492 [2024-07-14 02:02:21.137138] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:15.492 [2024-07-14 02:02:21.137918] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:15.492 [2024-07-14 02:02:21.137940] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:15.492 [2024-07-14 02:02:21.137953] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:15.492 [2024-07-14 02:02:21.138923] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:15.492 [2024-07-14 02:02:21.138942] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:15.492 [2024-07-14 02:02:21.138957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:15.492 [2024-07-14 02:02:21.139930] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:15.492 [2024-07-14 02:02:21.139950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:15.492 [2024-07-14 02:02:21.140934] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:15.492 [2024-07-14 02:02:21.140953] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:15.492 [2024-07-14 02:02:21.140962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:15.492 [2024-07-14 02:02:21.140974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:15.492 [2024-07-14 02:02:21.141084] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:15.492 [2024-07-14 02:02:21.141092] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:15.492 [2024-07-14 02:02:21.141100] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:15.492 [2024-07-14 02:02:21.141937] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:15.492 [2024-07-14 02:02:21.142945] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:15.492 [2024-07-14 02:02:21.143955] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:15.492 [2024-07-14 02:02:21.144948] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.492 [2024-07-14 02:02:21.145091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:15.492 [2024-07-14 02:02:21.145960] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:15.492 [2024-07-14 02:02:21.145978] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:15.492 [2024-07-14 02:02:21.145987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146012] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:15.492 [2024-07-14 02:02:21.146026] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146056] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:15.492 [2024-07-14 02:02:21.146066] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.492 [2024-07-14 02:02:21.146090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.492 [2024-07-14 02:02:21.146177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:15.492 [2024-07-14 02:02:21.146198] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:15.492 [2024-07-14 02:02:21.146210] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:15.492 [2024-07-14 02:02:21.146218] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:15.492 [2024-07-14 02:02:21.146225] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:15.492 [2024-07-14 02:02:21.146233] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:15.492 [2024-07-14 02:02:21.146241] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:15.492 [2024-07-14 02:02:21.146248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:15.492 [2024-07-14 02:02:21.146293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:15.492 [2024-07-14 02:02:21.146315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.492 [2024-07-14 02:02:21.146328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.492 [2024-07-14 02:02:21.146340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.492 [2024-07-14 02:02:21.146351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.492 [2024-07-14 02:02:21.146359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:15.492 [2024-07-14 02:02:21.146399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:15.492 [2024-07-14 02:02:21.146410] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:15.492 [2024-07-14 02:02:21.146419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:15.492 [2024-07-14 02:02:21.146466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:15.492 [2024-07-14 02:02:21.146530] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146558] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:15.492 [2024-07-14 02:02:21.146566] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:15.492 [2024-07-14 02:02:21.146576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:15.492 [2024-07-14 02:02:21.146594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:15.492 [2024-07-14 02:02:21.146612] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:15.492 [2024-07-14 02:02:21.146635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:15.492 [2024-07-14 02:02:21.146662] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:15.492 [2024-07-14 02:02:21.146670] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.493 [2024-07-14 02:02:21.146679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.493 [2024-07-14 02:02:21.146705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:15.493 [2024-07-14 02:02:21.146729] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:15.493 [2024-07-14 02:02:21.146744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:15.493 [2024-07-14 02:02:21.146755] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:15.493 [2024-07-14 02:02:21.146763] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.493 [2024-07-14 02:02:21.146772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.493 [2024-07-14 02:02:21.146788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:15.493 [2024-07-14 02:02:21.146802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:15.493 [2024-07-14 02:02:21.146813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:15:15.493 [2024-07-14 02:02:21.146827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:15.493 [2024-07-14 02:02:21.146838] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:15.493 [2024-07-14 02:02:21.146861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:15.493 [2024-07-14 02:02:21.146878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:15.493 [2024-07-14 02:02:21.146892] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:15.493 [2024-07-14 02:02:21.146900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:15.493 [2024-07-14 02:02:21.146909] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:15.493 [2024-07-14 02:02:21.146938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:15.493 [2024-07-14 02:02:21.146958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:15.493 [2024-07-14 02:02:21.146978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:15.493 [2024-07-14 02:02:21.146990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:15.493 [2024-07-14 02:02:21.147007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:15.493 [2024-07-14 02:02:21.147019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:15.493 [2024-07-14 02:02:21.147036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:15.493 [2024-07-14 02:02:21.147048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:15.493 [2024-07-14 02:02:21.147072] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:15.493 [2024-07-14 02:02:21.147082] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:15.493 [2024-07-14 02:02:21.147089] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:15.493 [2024-07-14 02:02:21.147095] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:15.493 [2024-07-14 02:02:21.147104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:15.493 [2024-07-14 02:02:21.147117] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:15.493 
[2024-07-14 02:02:21.147125] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:15.493 [2024-07-14 02:02:21.147134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:15.493 [2024-07-14 02:02:21.147145] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:15.493 [2024-07-14 02:02:21.147168] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.493 [2024-07-14 02:02:21.147177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.493 [2024-07-14 02:02:21.147190] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:15.493 [2024-07-14 02:02:21.147197] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:15.493 [2024-07-14 02:02:21.147206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:15.493 [2024-07-14 02:02:21.147232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:15.493 [2024-07-14 02:02:21.147252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:15.493 [2024-07-14 02:02:21.147270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:15.493 [2024-07-14 02:02:21.147284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:15.493 ===================================================== 00:15:15.493 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:15.493 ===================================================== 00:15:15.493 Controller Capabilities/Features 00:15:15.493 ================================ 00:15:15.493 Vendor ID: 4e58 00:15:15.493 Subsystem Vendor ID: 4e58 00:15:15.493 Serial Number: SPDK1 00:15:15.493 Model Number: SPDK bdev Controller 00:15:15.493 Firmware Version: 24.09 00:15:15.493 Recommended Arb Burst: 6 00:15:15.493 IEEE OUI Identifier: 8d 6b 50 00:15:15.493 Multi-path I/O 00:15:15.493 May have multiple subsystem ports: Yes 00:15:15.493 May have multiple controllers: Yes 00:15:15.493 Associated with SR-IOV VF: No 00:15:15.493 Max Data Transfer Size: 131072 00:15:15.493 Max Number of Namespaces: 32 00:15:15.493 Max Number of I/O Queues: 127 00:15:15.493 NVMe Specification Version (VS): 1.3 00:15:15.493 NVMe Specification Version (Identify): 1.3 00:15:15.493 Maximum Queue Entries: 256 00:15:15.493 Contiguous Queues Required: Yes 00:15:15.493 Arbitration Mechanisms Supported 00:15:15.493 Weighted Round Robin: Not Supported 00:15:15.493 Vendor Specific: Not Supported 00:15:15.493 Reset Timeout: 15000 ms 00:15:15.493 Doorbell Stride: 4 bytes 00:15:15.493 NVM Subsystem Reset: Not Supported 00:15:15.493 Command Sets Supported 00:15:15.493 NVM Command Set: Supported 00:15:15.493 Boot Partition: Not Supported 00:15:15.493 Memory Page Size Minimum: 4096 bytes 00:15:15.493 Memory Page Size Maximum: 4096 bytes 00:15:15.493 Persistent Memory Region: Not Supported 
00:15:15.493 Optional Asynchronous Events Supported 00:15:15.493 Namespace Attribute Notices: Supported 00:15:15.493 Firmware Activation Notices: Not Supported 00:15:15.493 ANA Change Notices: Not Supported 00:15:15.493 PLE Aggregate Log Change Notices: Not Supported 00:15:15.493 LBA Status Info Alert Notices: Not Supported 00:15:15.493 EGE Aggregate Log Change Notices: Not Supported 00:15:15.493 Normal NVM Subsystem Shutdown event: Not Supported 00:15:15.493 Zone Descriptor Change Notices: Not Supported 00:15:15.493 Discovery Log Change Notices: Not Supported 00:15:15.493 Controller Attributes 00:15:15.493 128-bit Host Identifier: Supported 00:15:15.493 Non-Operational Permissive Mode: Not Supported 00:15:15.493 NVM Sets: Not Supported 00:15:15.493 Read Recovery Levels: Not Supported 00:15:15.493 Endurance Groups: Not Supported 00:15:15.493 Predictable Latency Mode: Not Supported 00:15:15.493 Traffic Based Keep ALive: Not Supported 00:15:15.493 Namespace Granularity: Not Supported 00:15:15.493 SQ Associations: Not Supported 00:15:15.493 UUID List: Not Supported 00:15:15.493 Multi-Domain Subsystem: Not Supported 00:15:15.493 Fixed Capacity Management: Not Supported 00:15:15.493 Variable Capacity Management: Not Supported 00:15:15.493 Delete Endurance Group: Not Supported 00:15:15.493 Delete NVM Set: Not Supported 00:15:15.493 Extended LBA Formats Supported: Not Supported 00:15:15.493 Flexible Data Placement Supported: Not Supported 00:15:15.493 00:15:15.493 Controller Memory Buffer Support 00:15:15.493 ================================ 00:15:15.493 Supported: No 00:15:15.493 00:15:15.493 Persistent Memory Region Support 00:15:15.493 ================================ 00:15:15.493 Supported: No 00:15:15.493 00:15:15.493 Admin Command Set Attributes 00:15:15.493 ============================ 00:15:15.493 Security Send/Receive: Not Supported 00:15:15.493 Format NVM: Not Supported 00:15:15.493 Firmware Activate/Download: Not Supported 00:15:15.493 Namespace Management: Not Supported 00:15:15.493 Device Self-Test: Not Supported 00:15:15.493 Directives: Not Supported 00:15:15.493 NVMe-MI: Not Supported 00:15:15.494 Virtualization Management: Not Supported 00:15:15.494 Doorbell Buffer Config: Not Supported 00:15:15.494 Get LBA Status Capability: Not Supported 00:15:15.494 Command & Feature Lockdown Capability: Not Supported 00:15:15.494 Abort Command Limit: 4 00:15:15.494 Async Event Request Limit: 4 00:15:15.494 Number of Firmware Slots: N/A 00:15:15.494 Firmware Slot 1 Read-Only: N/A 00:15:15.494 Firmware Activation Without Reset: N/A 00:15:15.494 Multiple Update Detection Support: N/A 00:15:15.494 Firmware Update Granularity: No Information Provided 00:15:15.494 Per-Namespace SMART Log: No 00:15:15.494 Asymmetric Namespace Access Log Page: Not Supported 00:15:15.494 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:15.494 Command Effects Log Page: Supported 00:15:15.494 Get Log Page Extended Data: Supported 00:15:15.494 Telemetry Log Pages: Not Supported 00:15:15.494 Persistent Event Log Pages: Not Supported 00:15:15.494 Supported Log Pages Log Page: May Support 00:15:15.494 Commands Supported & Effects Log Page: Not Supported 00:15:15.494 Feature Identifiers & Effects Log Page:May Support 00:15:15.494 NVMe-MI Commands & Effects Log Page: May Support 00:15:15.494 Data Area 4 for Telemetry Log: Not Supported 00:15:15.494 Error Log Page Entries Supported: 128 00:15:15.494 Keep Alive: Supported 00:15:15.494 Keep Alive Granularity: 10000 ms 00:15:15.494 00:15:15.494 NVM Command Set Attributes 
00:15:15.494 ========================== 00:15:15.494 Submission Queue Entry Size 00:15:15.494 Max: 64 00:15:15.494 Min: 64 00:15:15.494 Completion Queue Entry Size 00:15:15.494 Max: 16 00:15:15.494 Min: 16 00:15:15.494 Number of Namespaces: 32 00:15:15.494 Compare Command: Supported 00:15:15.494 Write Uncorrectable Command: Not Supported 00:15:15.494 Dataset Management Command: Supported 00:15:15.494 Write Zeroes Command: Supported 00:15:15.494 Set Features Save Field: Not Supported 00:15:15.494 Reservations: Not Supported 00:15:15.494 Timestamp: Not Supported 00:15:15.494 Copy: Supported 00:15:15.494 Volatile Write Cache: Present 00:15:15.494 Atomic Write Unit (Normal): 1 00:15:15.494 Atomic Write Unit (PFail): 1 00:15:15.494 Atomic Compare & Write Unit: 1 00:15:15.494 Fused Compare & Write: Supported 00:15:15.494 Scatter-Gather List 00:15:15.494 SGL Command Set: Supported (Dword aligned) 00:15:15.494 SGL Keyed: Not Supported 00:15:15.494 SGL Bit Bucket Descriptor: Not Supported 00:15:15.494 SGL Metadata Pointer: Not Supported 00:15:15.494 Oversized SGL: Not Supported 00:15:15.494 SGL Metadata Address: Not Supported 00:15:15.494 SGL Offset: Not Supported 00:15:15.494 Transport SGL Data Block: Not Supported 00:15:15.494 Replay Protected Memory Block: Not Supported 00:15:15.494 00:15:15.494 Firmware Slot Information 00:15:15.494 ========================= 00:15:15.494 Active slot: 1 00:15:15.494 Slot 1 Firmware Revision: 24.09 00:15:15.494 00:15:15.494 00:15:15.494 Commands Supported and Effects 00:15:15.494 ============================== 00:15:15.494 Admin Commands 00:15:15.494 -------------- 00:15:15.494 Get Log Page (02h): Supported 00:15:15.494 Identify (06h): Supported 00:15:15.494 Abort (08h): Supported 00:15:15.494 Set Features (09h): Supported 00:15:15.494 Get Features (0Ah): Supported 00:15:15.494 Asynchronous Event Request (0Ch): Supported 00:15:15.494 Keep Alive (18h): Supported 00:15:15.494 I/O Commands 00:15:15.494 ------------ 00:15:15.494 Flush (00h): Supported LBA-Change 00:15:15.494 Write (01h): Supported LBA-Change 00:15:15.494 Read (02h): Supported 00:15:15.494 Compare (05h): Supported 00:15:15.494 Write Zeroes (08h): Supported LBA-Change 00:15:15.494 Dataset Management (09h): Supported LBA-Change 00:15:15.494 Copy (19h): Supported LBA-Change 00:15:15.494 00:15:15.494 Error Log 00:15:15.494 ========= 00:15:15.494 00:15:15.494 Arbitration 00:15:15.494 =========== 00:15:15.494 Arbitration Burst: 1 00:15:15.494 00:15:15.494 Power Management 00:15:15.494 ================ 00:15:15.494 Number of Power States: 1 00:15:15.494 Current Power State: Power State #0 00:15:15.494 Power State #0: 00:15:15.494 Max Power: 0.00 W 00:15:15.494 Non-Operational State: Operational 00:15:15.494 Entry Latency: Not Reported 00:15:15.494 Exit Latency: Not Reported 00:15:15.494 Relative Read Throughput: 0 00:15:15.494 Relative Read Latency: 0 00:15:15.494 Relative Write Throughput: 0 00:15:15.494 Relative Write Latency: 0 00:15:15.494 Idle Power: Not Reported 00:15:15.494 Active Power: Not Reported 00:15:15.494 Non-Operational Permissive Mode: Not Supported 00:15:15.494 00:15:15.494 Health Information 00:15:15.494 ================== 00:15:15.494 Critical Warnings: 00:15:15.494 Available Spare Space: OK 00:15:15.494 Temperature: OK 00:15:15.494 Device Reliability: OK 00:15:15.494 Read Only: No 00:15:15.494 Volatile Memory Backup: OK 00:15:15.494 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:15.494 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:15.494 Available Spare: 0% 00:15:15.494 
[2024-07-14 02:02:21.147401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:15.494 [2024-07-14 02:02:21.147417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:15.494 [2024-07-14 02:02:21.147461] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:15.494 [2024-07-14 02:02:21.147479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.494 [2024-07-14 02:02:21.147490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.494 [2024-07-14 02:02:21.147499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.494 [2024-07-14 02:02:21.147509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.494 [2024-07-14 02:02:21.151879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:15.494 [2024-07-14 02:02:21.151902] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:15.494 [2024-07-14 02:02:21.151982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.494 [2024-07-14 02:02:21.152063] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:15.494 [2024-07-14 02:02:21.152078] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:15.494 [2024-07-14 02:02:21.152991] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:15.494 [2024-07-14 02:02:21.153015] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:15.494 [2024-07-14 02:02:21.153072] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:15.494 [2024-07-14 02:02:21.155031] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:15.754 Available Spare Threshold: 0% 00:15:15.754 Life Percentage Used: 0% 00:15:15.754 Data Units Read: 0 00:15:15.754 Data Units Written: 0 00:15:15.754 Host Read Commands: 0 00:15:15.754 Host Write Commands: 0 00:15:15.754 Controller Busy Time: 0 minutes 00:15:15.754 Power Cycles: 0 00:15:15.754 Power On Hours: 0 hours 00:15:15.754 Unsafe Shutdowns: 0 00:15:15.754 Unrecoverable Media Errors: 0 00:15:15.754 Lifetime Error Log Entries: 0 00:15:15.754 Warning Temperature Time: 0 minutes 00:15:15.754 Critical Temperature Time: 0 minutes 00:15:15.754 00:15:15.754 Number of Queues 00:15:15.754 ================ 00:15:15.754 Number of I/O Submission Queues: 127 00:15:15.754 Number of I/O Completion Queues: 127 00:15:15.754 00:15:15.754 Active Namespaces 00:15:15.754 ================= 00:15:15.754 Namespace ID:1 00:15:15.754 Error Recovery Timeout: Unlimited 00:15:15.754 Command
Set Identifier: NVM (00h) 00:15:15.754 Deallocate: Supported 00:15:15.754 Deallocated/Unwritten Error: Not Supported 00:15:15.754 Deallocated Read Value: Unknown 00:15:15.754 Deallocate in Write Zeroes: Not Supported 00:15:15.754 Deallocated Guard Field: 0xFFFF 00:15:15.754 Flush: Supported 00:15:15.754 Reservation: Supported 00:15:15.754 Namespace Sharing Capabilities: Multiple Controllers 00:15:15.754 Size (in LBAs): 131072 (0GiB) 00:15:15.754 Capacity (in LBAs): 131072 (0GiB) 00:15:15.754 Utilization (in LBAs): 131072 (0GiB) 00:15:15.754 NGUID: 8FCB4F7806D24299BB0AC7F2B289A8A0 00:15:15.754 UUID: 8fcb4f78-06d2-4299-bb0a-c7f2b289a8a0 00:15:15.754 Thin Provisioning: Not Supported 00:15:15.754 Per-NS Atomic Units: Yes 00:15:15.754 Atomic Boundary Size (Normal): 0 00:15:15.754 Atomic Boundary Size (PFail): 0 00:15:15.754 Atomic Boundary Offset: 0 00:15:15.754 Maximum Single Source Range Length: 65535 00:15:15.754 Maximum Copy Length: 65535 00:15:15.754 Maximum Source Range Count: 1 00:15:15.754 NGUID/EUI64 Never Reused: No 00:15:15.754 Namespace Write Protected: No 00:15:15.754 Number of LBA Formats: 1 00:15:15.754 Current LBA Format: LBA Format #00 00:15:15.754 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:15.754 00:15:15.754 02:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:15.754 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.754 [2024-07-14 02:02:21.381719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.050 Initializing NVMe Controllers 00:15:21.050 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:21.050 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:21.050 Initialization complete. Launching workers. 00:15:21.050 ======================================================== 00:15:21.050 Latency(us) 00:15:21.050 Device Information : IOPS MiB/s Average min max 00:15:21.050 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34990.00 136.68 3659.28 1173.82 7608.42 00:15:21.050 ======================================================== 00:15:21.050 Total : 34990.00 136.68 3659.28 1173.82 7608.42 00:15:21.050 00:15:21.050 [2024-07-14 02:02:26.405277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.050 02:02:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:21.050 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.050 [2024-07-14 02:02:26.643450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:26.323 Initializing NVMe Controllers 00:15:26.323 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:26.323 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:26.323 Initialization complete. Launching workers. 
00:15:26.323 ======================================================== 00:15:26.323 Latency(us) 00:15:26.323 Device Information : IOPS MiB/s Average min max 00:15:26.323 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8006.80 4976.20 15959.42 00:15:26.323 ======================================================== 00:15:26.323 Total : 16000.00 62.50 8006.80 4976.20 15959.42 00:15:26.323 00:15:26.323 [2024-07-14 02:02:31.679209] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:26.323 02:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:26.323 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.323 [2024-07-14 02:02:31.901321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.595 [2024-07-14 02:02:36.969185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.595 Initializing NVMe Controllers 00:15:31.595 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:31.595 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:31.595 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:31.595 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:31.595 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:31.595 Initialization complete. Launching workers. 00:15:31.595 Starting thread on core 2 00:15:31.595 Starting thread on core 3 00:15:31.595 Starting thread on core 1 00:15:31.595 02:02:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:31.595 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.595 [2024-07-14 02:02:37.262196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:34.888 [2024-07-14 02:02:40.320125] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:34.888 Initializing NVMe Controllers 00:15:34.888 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:34.888 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:34.888 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:34.888 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:34.888 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:34.888 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:34.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:34.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:34.888 Initialization complete. Launching workers. 
00:15:34.888 Starting thread on core 1 with urgent priority queue 00:15:34.888 Starting thread on core 2 with urgent priority queue 00:15:34.888 Starting thread on core 3 with urgent priority queue 00:15:34.888 Starting thread on core 0 with urgent priority queue 00:15:34.888 SPDK bdev Controller (SPDK1 ) core 0: 5397.00 IO/s 18.53 secs/100000 ios 00:15:34.888 SPDK bdev Controller (SPDK1 ) core 1: 5852.67 IO/s 17.09 secs/100000 ios 00:15:34.888 SPDK bdev Controller (SPDK1 ) core 2: 5937.33 IO/s 16.84 secs/100000 ios 00:15:34.888 SPDK bdev Controller (SPDK1 ) core 3: 5838.00 IO/s 17.13 secs/100000 ios 00:15:34.888 ======================================================== 00:15:34.888 00:15:34.888 02:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:34.888 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.147 [2024-07-14 02:02:40.621423] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.147 Initializing NVMe Controllers 00:15:35.147 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.147 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.147 Namespace ID: 1 size: 0GB 00:15:35.147 Initialization complete. 00:15:35.147 INFO: using host memory buffer for IO 00:15:35.147 Hello world! 00:15:35.147 [2024-07-14 02:02:40.655022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.147 02:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:35.147 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.417 [2024-07-14 02:02:40.959362] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:36.350 Initializing NVMe Controllers 00:15:36.350 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.350 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.350 Initialization complete. Launching workers. 
00:15:36.350 submit (in ns) avg, min, max = 7813.0, 3495.6, 4008872.2 00:15:36.350 complete (in ns) avg, min, max = 23024.7, 2064.4, 4018481.1 00:15:36.350 00:15:36.350 Submit histogram 00:15:36.350 ================ 00:15:36.350 Range in us Cumulative Count 00:15:36.351 3.484 - 3.508: 0.0224% ( 3) 00:15:36.351 3.508 - 3.532: 0.5293% ( 68) 00:15:36.351 3.532 - 3.556: 1.7520% ( 164) 00:15:36.351 3.556 - 3.579: 5.0772% ( 446) 00:15:36.351 3.579 - 3.603: 10.9968% ( 794) 00:15:36.351 3.603 - 3.627: 19.6749% ( 1164) 00:15:36.351 3.627 - 3.650: 28.3233% ( 1160) 00:15:36.351 3.650 - 3.674: 36.1739% ( 1053) 00:15:36.351 3.674 - 3.698: 42.7570% ( 883) 00:15:36.351 3.698 - 3.721: 49.8397% ( 950) 00:15:36.351 3.721 - 3.745: 55.0436% ( 698) 00:15:36.351 3.745 - 3.769: 59.6511% ( 618) 00:15:36.351 3.769 - 3.793: 63.1402% ( 468) 00:15:36.351 3.793 - 3.816: 66.5847% ( 462) 00:15:36.351 3.816 - 3.840: 70.1036% ( 472) 00:15:36.351 3.840 - 3.864: 74.3383% ( 568) 00:15:36.351 3.864 - 3.887: 78.4537% ( 552) 00:15:36.351 3.887 - 3.911: 81.7341% ( 440) 00:15:36.351 3.911 - 3.935: 84.7312% ( 402) 00:15:36.351 3.935 - 3.959: 86.8188% ( 280) 00:15:36.351 3.959 - 3.982: 88.6677% ( 248) 00:15:36.351 3.982 - 4.006: 90.1439% ( 198) 00:15:36.351 4.006 - 4.030: 91.2100% ( 143) 00:15:36.351 4.030 - 4.053: 92.4625% ( 168) 00:15:36.351 4.053 - 4.077: 93.4094% ( 127) 00:15:36.351 4.077 - 4.101: 94.1624% ( 101) 00:15:36.351 4.101 - 4.124: 94.7886% ( 84) 00:15:36.351 4.124 - 4.148: 95.3403% ( 74) 00:15:36.351 4.148 - 4.172: 95.7653% ( 57) 00:15:36.351 4.172 - 4.196: 96.0859% ( 43) 00:15:36.351 4.196 - 4.219: 96.3245% ( 32) 00:15:36.351 4.219 - 4.243: 96.4810% ( 21) 00:15:36.351 4.243 - 4.267: 96.6525% ( 23) 00:15:36.351 4.267 - 4.290: 96.7569% ( 14) 00:15:36.351 4.290 - 4.314: 96.8762% ( 16) 00:15:36.351 4.314 - 4.338: 96.9582% ( 11) 00:15:36.351 4.338 - 4.361: 97.0700% ( 15) 00:15:36.351 4.361 - 4.385: 97.1222% ( 7) 00:15:36.351 4.385 - 4.409: 97.1595% ( 5) 00:15:36.351 4.409 - 4.433: 97.1818% ( 3) 00:15:36.351 4.433 - 4.456: 97.2191% ( 5) 00:15:36.351 4.456 - 4.480: 97.2489% ( 4) 00:15:36.351 4.480 - 4.504: 97.2788% ( 4) 00:15:36.351 4.504 - 4.527: 97.2937% ( 2) 00:15:36.351 4.527 - 4.551: 97.3086% ( 2) 00:15:36.351 4.551 - 4.575: 97.3384% ( 4) 00:15:36.351 4.575 - 4.599: 97.3757% ( 5) 00:15:36.351 4.599 - 4.622: 97.4130% ( 5) 00:15:36.351 4.622 - 4.646: 97.4502% ( 5) 00:15:36.351 4.646 - 4.670: 97.4875% ( 5) 00:15:36.351 4.670 - 4.693: 97.5173% ( 4) 00:15:36.351 4.693 - 4.717: 97.5546% ( 5) 00:15:36.351 4.717 - 4.741: 97.6292% ( 10) 00:15:36.351 4.741 - 4.764: 97.6664% ( 5) 00:15:36.351 4.764 - 4.788: 97.7485% ( 11) 00:15:36.351 4.788 - 4.812: 97.7857% ( 5) 00:15:36.351 4.812 - 4.836: 97.8156% ( 4) 00:15:36.351 4.836 - 4.859: 97.8528% ( 5) 00:15:36.351 4.859 - 4.883: 97.8827% ( 4) 00:15:36.351 4.883 - 4.907: 97.8976% ( 2) 00:15:36.351 4.907 - 4.930: 97.9498% ( 7) 00:15:36.351 4.954 - 4.978: 97.9647% ( 2) 00:15:36.351 4.978 - 5.001: 97.9796% ( 2) 00:15:36.351 5.001 - 5.025: 98.0019% ( 3) 00:15:36.351 5.025 - 5.049: 98.0392% ( 5) 00:15:36.351 5.049 - 5.073: 98.0467% ( 1) 00:15:36.351 5.073 - 5.096: 98.0616% ( 2) 00:15:36.351 5.096 - 5.120: 98.0839% ( 3) 00:15:36.351 5.120 - 5.144: 98.0989% ( 2) 00:15:36.351 5.144 - 5.167: 98.1063% ( 1) 00:15:36.351 5.167 - 5.191: 98.1361% ( 4) 00:15:36.351 5.191 - 5.215: 98.1510% ( 2) 00:15:36.351 5.215 - 5.239: 98.1585% ( 1) 00:15:36.351 5.262 - 5.286: 98.1734% ( 2) 00:15:36.351 5.310 - 5.333: 98.1883% ( 2) 00:15:36.351 5.333 - 5.357: 98.2032% ( 2) 00:15:36.351 5.381 - 5.404: 98.2181% ( 2) 
00:15:36.351 5.523 - 5.547: 98.2256% ( 1) 00:15:36.351 5.547 - 5.570: 98.2331% ( 1) 00:15:36.351 5.594 - 5.618: 98.2480% ( 2) 00:15:36.351 5.618 - 5.641: 98.2554% ( 1) 00:15:36.351 5.641 - 5.665: 98.2778% ( 3) 00:15:36.351 5.760 - 5.784: 98.2927% ( 2) 00:15:36.351 5.807 - 5.831: 98.3002% ( 1) 00:15:36.351 5.879 - 5.902: 98.3076% ( 1) 00:15:36.351 5.973 - 5.997: 98.3151% ( 1) 00:15:36.351 5.997 - 6.021: 98.3225% ( 1) 00:15:36.351 6.021 - 6.044: 98.3300% ( 1) 00:15:36.351 6.044 - 6.068: 98.3374% ( 1) 00:15:36.351 6.116 - 6.163: 98.3449% ( 1) 00:15:36.351 6.163 - 6.210: 98.3523% ( 1) 00:15:36.351 6.258 - 6.305: 98.3747% ( 3) 00:15:36.351 6.447 - 6.495: 98.3822% ( 1) 00:15:36.351 6.542 - 6.590: 98.3896% ( 1) 00:15:36.351 6.637 - 6.684: 98.3971% ( 1) 00:15:36.351 6.779 - 6.827: 98.4045% ( 1) 00:15:36.351 6.921 - 6.969: 98.4120% ( 1) 00:15:36.351 6.969 - 7.016: 98.4194% ( 1) 00:15:36.351 7.016 - 7.064: 98.4269% ( 1) 00:15:36.351 7.064 - 7.111: 98.4344% ( 1) 00:15:36.351 7.111 - 7.159: 98.4493% ( 2) 00:15:36.351 7.253 - 7.301: 98.4567% ( 1) 00:15:36.351 7.301 - 7.348: 98.4642% ( 1) 00:15:36.351 7.396 - 7.443: 98.4791% ( 2) 00:15:36.351 7.538 - 7.585: 98.5015% ( 3) 00:15:36.351 7.633 - 7.680: 98.5089% ( 1) 00:15:36.351 7.680 - 7.727: 98.5238% ( 2) 00:15:36.351 7.727 - 7.775: 98.5313% ( 1) 00:15:36.351 7.775 - 7.822: 98.5387% ( 1) 00:15:36.351 7.822 - 7.870: 98.5462% ( 1) 00:15:36.351 7.917 - 7.964: 98.5686% ( 3) 00:15:36.351 8.012 - 8.059: 98.5835% ( 2) 00:15:36.351 8.154 - 8.201: 98.5909% ( 1) 00:15:36.351 8.201 - 8.249: 98.6207% ( 4) 00:15:36.351 8.344 - 8.391: 98.6282% ( 1) 00:15:36.351 8.391 - 8.439: 98.6357% ( 1) 00:15:36.351 8.439 - 8.486: 98.6431% ( 1) 00:15:36.351 8.676 - 8.723: 98.6506% ( 1) 00:15:36.351 8.723 - 8.770: 98.6729% ( 3) 00:15:36.351 8.865 - 8.913: 98.6878% ( 2) 00:15:36.351 9.007 - 9.055: 98.6953% ( 1) 00:15:36.351 9.055 - 9.102: 98.7028% ( 1) 00:15:36.351 9.150 - 9.197: 98.7102% ( 1) 00:15:36.351 9.197 - 9.244: 98.7177% ( 1) 00:15:36.351 9.244 - 9.292: 98.7251% ( 1) 00:15:36.351 9.339 - 9.387: 98.7326% ( 1) 00:15:36.351 9.387 - 9.434: 98.7400% ( 1) 00:15:36.351 9.576 - 9.624: 98.7624% ( 3) 00:15:36.351 9.671 - 9.719: 98.7699% ( 1) 00:15:36.351 9.861 - 9.908: 98.7773% ( 1) 00:15:36.351 10.240 - 10.287: 98.7848% ( 1) 00:15:36.351 10.287 - 10.335: 98.7922% ( 1) 00:15:36.351 10.430 - 10.477: 98.7997% ( 1) 00:15:36.351 10.477 - 10.524: 98.8071% ( 1) 00:15:36.351 10.572 - 10.619: 98.8146% ( 1) 00:15:36.351 10.619 - 10.667: 98.8220% ( 1) 00:15:36.351 10.714 - 10.761: 98.8369% ( 2) 00:15:36.351 10.761 - 10.809: 98.8444% ( 1) 00:15:36.351 10.809 - 10.856: 98.8519% ( 1) 00:15:36.351 11.046 - 11.093: 98.8593% ( 1) 00:15:36.351 11.188 - 11.236: 98.8668% ( 1) 00:15:36.351 11.236 - 11.283: 98.8742% ( 1) 00:15:36.351 11.520 - 11.567: 98.8817% ( 1) 00:15:36.351 11.662 - 11.710: 98.8891% ( 1) 00:15:36.351 11.899 - 11.947: 98.9040% ( 2) 00:15:36.351 12.136 - 12.231: 98.9339% ( 4) 00:15:36.351 12.326 - 12.421: 98.9413% ( 1) 00:15:36.351 12.610 - 12.705: 98.9488% ( 1) 00:15:36.351 12.800 - 12.895: 98.9637% ( 2) 00:15:36.351 13.084 - 13.179: 98.9786% ( 2) 00:15:36.351 13.179 - 13.274: 98.9861% ( 1) 00:15:36.351 13.274 - 13.369: 99.0010% ( 2) 00:15:36.351 13.464 - 13.559: 99.0159% ( 2) 00:15:36.351 13.559 - 13.653: 99.0233% ( 1) 00:15:36.351 13.653 - 13.748: 99.0308% ( 1) 00:15:36.351 13.843 - 13.938: 99.0382% ( 1) 00:15:36.351 13.938 - 14.033: 99.0457% ( 1) 00:15:36.351 14.033 - 14.127: 99.0532% ( 1) 00:15:36.351 14.222 - 14.317: 99.0681% ( 2) 00:15:36.351 14.412 - 14.507: 99.0755% ( 1) 
00:15:36.351 14.601 - 14.696: 99.0830% ( 1) 00:15:36.351 14.791 - 14.886: 99.0904% ( 1) 00:15:36.351 14.886 - 14.981: 99.0979% ( 1) 00:15:36.351 15.076 - 15.170: 99.1053% ( 1) 00:15:36.351 15.360 - 15.455: 99.1128% ( 1) 00:15:36.351 17.067 - 17.161: 99.1203% ( 1) 00:15:36.351 17.161 - 17.256: 99.1277% ( 1) 00:15:36.351 17.256 - 17.351: 99.1352% ( 1) 00:15:36.351 17.351 - 17.446: 99.1501% ( 2) 00:15:36.351 17.446 - 17.541: 99.1948% ( 6) 00:15:36.351 17.541 - 17.636: 99.2246% ( 4) 00:15:36.351 17.636 - 17.730: 99.2395% ( 2) 00:15:36.351 17.730 - 17.825: 99.2768% ( 5) 00:15:36.351 17.825 - 17.920: 99.2992% ( 3) 00:15:36.351 17.920 - 18.015: 99.3216% ( 3) 00:15:36.351 18.015 - 18.110: 99.3887% ( 9) 00:15:36.351 18.110 - 18.204: 99.4483% ( 8) 00:15:36.351 18.204 - 18.299: 99.4930% ( 6) 00:15:36.352 18.299 - 18.394: 99.5601% ( 9) 00:15:36.352 18.394 - 18.489: 99.6049% ( 6) 00:15:36.352 18.489 - 18.584: 99.6570% ( 7) 00:15:36.352 18.584 - 18.679: 99.6720% ( 2) 00:15:36.352 18.679 - 18.773: 99.7018% ( 4) 00:15:36.352 18.773 - 18.868: 99.7316% ( 4) 00:15:36.352 18.868 - 18.963: 99.7465% ( 2) 00:15:36.352 18.963 - 19.058: 99.7689% ( 3) 00:15:36.352 19.247 - 19.342: 99.7763% ( 1) 00:15:36.352 19.342 - 19.437: 99.7838% ( 1) 00:15:36.352 19.532 - 19.627: 99.7912% ( 1) 00:15:36.352 19.627 - 19.721: 99.7987% ( 1) 00:15:36.352 19.721 - 19.816: 99.8062% ( 1) 00:15:36.352 19.816 - 19.911: 99.8211% ( 2) 00:15:36.352 20.480 - 20.575: 99.8285% ( 1) 00:15:36.352 21.428 - 21.523: 99.8360% ( 1) 00:15:36.352 21.807 - 21.902: 99.8434% ( 1) 00:15:36.352 21.997 - 22.092: 99.8509% ( 1) 00:15:36.352 22.281 - 22.376: 99.8583% ( 1) 00:15:36.352 22.756 - 22.850: 99.8658% ( 1) 00:15:36.352 24.462 - 24.652: 99.8733% ( 1) 00:15:36.352 26.169 - 26.359: 99.8807% ( 1) 00:15:36.352 27.876 - 28.065: 99.8882% ( 1) 00:15:36.352 28.444 - 28.634: 99.8956% ( 1) 00:15:36.352 29.582 - 29.772: 99.9031% ( 1) 00:15:36.352 3980.705 - 4004.978: 99.9925% ( 12) 00:15:36.352 4004.978 - 4029.250: 100.0000% ( 1) 00:15:36.352 00:15:36.352 Complete histogram 00:15:36.352 ================== 00:15:36.352 Range in us Cumulative Count 00:15:36.352 2.062 - 2.074: 1.6402% ( 220) 00:15:36.352 2.074 - 2.086: 32.5580% ( 4147) 00:15:36.352 2.086 - 2.098: 41.7655% ( 1235) 00:15:36.352 2.098 - 2.110: 46.7606% ( 670) 00:15:36.352 2.110 - 2.121: 59.6585% ( 1730) 00:15:36.352 2.121 - 2.133: 61.7684% ( 283) 00:15:36.352 2.133 - 2.145: 65.6900% ( 526) 00:15:36.352 2.145 - 2.157: 75.4417% ( 1308) 00:15:36.352 2.157 - 2.169: 77.0074% ( 210) 00:15:36.352 2.169 - 2.181: 79.1247% ( 284) 00:15:36.352 2.181 - 2.193: 82.3529% ( 433) 00:15:36.352 2.193 - 2.204: 83.0836% ( 98) 00:15:36.352 2.204 - 2.216: 84.3361% ( 168) 00:15:36.352 2.216 - 2.228: 88.0862% ( 503) 00:15:36.352 2.228 - 2.240: 89.9277% ( 247) 00:15:36.352 2.240 - 2.252: 91.9556% ( 272) 00:15:36.352 2.252 - 2.264: 93.4690% ( 203) 00:15:36.352 2.264 - 2.276: 93.8194% ( 47) 00:15:36.352 2.276 - 2.287: 94.1549% ( 45) 00:15:36.352 2.287 - 2.299: 94.5277% ( 50) 00:15:36.352 2.299 - 2.311: 95.1018% ( 77) 00:15:36.352 2.311 - 2.323: 95.6013% ( 67) 00:15:36.352 2.323 - 2.335: 95.7206% ( 16) 00:15:36.352 2.335 - 2.347: 95.7728% ( 7) 00:15:36.352 2.347 - 2.359: 95.9293% ( 21) 00:15:36.352 2.359 - 2.370: 96.1828% ( 34) 00:15:36.352 2.370 - 2.382: 96.4512% ( 36) 00:15:36.352 2.382 - 2.394: 96.8463% ( 53) 00:15:36.352 2.394 - 2.406: 97.1744% ( 44) 00:15:36.352 2.406 - 2.418: 97.3309% ( 21) 00:15:36.352 2.418 - 2.430: 97.4651% ( 18) 00:15:36.352 2.430 - 2.441: 97.6217% ( 21) 00:15:36.352 2.441 - 2.453: 97.7559% ( 18) 
00:15:36.352 2.453 - 2.465: 97.8603% ( 14) 00:15:36.352 2.465 - 2.477: 97.9647% ( 14) 00:15:36.352 2.477 - 2.489: 98.0318% ( 9) 00:15:36.352 2.489 - 2.501: 98.0914% ( 8) 00:15:36.352 2.501 - 2.513: 98.1436% ( 7) 00:15:36.352 2.513 - 2.524: 98.1734% ( 4) 00:15:36.352 2.524 - 2.536: 98.2032% ( 4) 00:15:36.352 2.536 - 2.548: 98.2256% ( 3) 00:15:36.352 2.548 - 2.560: 98.2331% ( 1) 00:15:36.352 2.607 - 2.619: 98.2405% ( 1) 00:15:36.352 2.619 - 2.631: 98.2480% ( 1) 00:15:36.352 2.679 - 2.690: 98.2554% ( 1) 00:15:36.352 2.714 - 2.726: 98.2629% ( 1) 00:15:36.352 2.750 - 2.761: 9[2024-07-14 02:02:41.981556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:36.352 8.2703% ( 1) 00:15:36.352 2.809 - 2.821: 98.2852% ( 2) 00:15:36.352 2.844 - 2.856: 98.2927% ( 1) 00:15:36.352 2.939 - 2.951: 98.3002% ( 1) 00:15:36.352 2.987 - 2.999: 98.3076% ( 1) 00:15:36.352 3.010 - 3.022: 98.3151% ( 1) 00:15:36.352 3.034 - 3.058: 98.3225% ( 1) 00:15:36.352 3.058 - 3.081: 98.3300% ( 1) 00:15:36.352 3.081 - 3.105: 98.3374% ( 1) 00:15:36.352 3.105 - 3.129: 98.3449% ( 1) 00:15:36.352 3.153 - 3.176: 98.3598% ( 2) 00:15:36.352 3.200 - 3.224: 98.3673% ( 1) 00:15:36.352 3.271 - 3.295: 98.3822% ( 2) 00:15:36.352 3.295 - 3.319: 98.3971% ( 2) 00:15:36.352 3.319 - 3.342: 98.4120% ( 2) 00:15:36.352 3.342 - 3.366: 98.4344% ( 3) 00:15:36.352 3.366 - 3.390: 98.4493% ( 2) 00:15:36.352 3.390 - 3.413: 98.4642% ( 2) 00:15:36.352 3.413 - 3.437: 98.4865% ( 3) 00:15:36.352 3.437 - 3.461: 98.5164% ( 4) 00:15:36.352 3.484 - 3.508: 98.5238% ( 1) 00:15:36.352 3.508 - 3.532: 98.5313% ( 1) 00:15:36.352 3.532 - 3.556: 98.5536% ( 3) 00:15:36.352 3.556 - 3.579: 98.5909% ( 5) 00:15:36.352 3.579 - 3.603: 98.5984% ( 1) 00:15:36.352 3.627 - 3.650: 98.6058% ( 1) 00:15:36.352 3.650 - 3.674: 98.6282% ( 3) 00:15:36.352 3.674 - 3.698: 98.6357% ( 1) 00:15:36.352 3.698 - 3.721: 98.6431% ( 1) 00:15:36.352 3.745 - 3.769: 98.6506% ( 1) 00:15:36.352 3.793 - 3.816: 98.6729% ( 3) 00:15:36.352 3.816 - 3.840: 98.6804% ( 1) 00:15:36.352 3.840 - 3.864: 98.6878% ( 1) 00:15:36.352 3.864 - 3.887: 98.7028% ( 2) 00:15:36.352 3.887 - 3.911: 98.7102% ( 1) 00:15:36.352 3.911 - 3.935: 98.7251% ( 2) 00:15:36.352 4.053 - 4.077: 98.7326% ( 1) 00:15:36.352 4.101 - 4.124: 98.7400% ( 1) 00:15:36.352 4.124 - 4.148: 98.7475% ( 1) 00:15:36.352 4.267 - 4.290: 98.7549% ( 1) 00:15:36.352 4.433 - 4.456: 98.7699% ( 2) 00:15:36.352 5.025 - 5.049: 98.7848% ( 2) 00:15:36.352 5.357 - 5.381: 98.7922% ( 1) 00:15:36.352 5.381 - 5.404: 98.7997% ( 1) 00:15:36.352 5.428 - 5.452: 98.8071% ( 1) 00:15:36.352 5.689 - 5.713: 98.8146% ( 1) 00:15:36.352 5.760 - 5.784: 98.8220% ( 1) 00:15:36.352 5.784 - 5.807: 98.8369% ( 2) 00:15:36.352 5.879 - 5.902: 98.8444% ( 1) 00:15:36.352 6.068 - 6.116: 98.8593% ( 2) 00:15:36.352 6.210 - 6.258: 98.8668% ( 1) 00:15:36.352 6.305 - 6.353: 98.8742% ( 1) 00:15:36.352 6.353 - 6.400: 98.8817% ( 1) 00:15:36.352 6.400 - 6.447: 98.8891% ( 1) 00:15:36.352 6.542 - 6.590: 98.8966% ( 1) 00:15:36.352 6.827 - 6.874: 98.9040% ( 1) 00:15:36.352 7.253 - 7.301: 98.9115% ( 1) 00:15:36.352 7.443 - 7.490: 98.9190% ( 1) 00:15:36.352 9.481 - 9.529: 98.9264% ( 1) 00:15:36.352 11.141 - 11.188: 98.9339% ( 1) 00:15:36.352 15.360 - 15.455: 98.9413% ( 1) 00:15:36.352 15.455 - 15.550: 98.9488% ( 1) 00:15:36.352 15.550 - 15.644: 98.9562% ( 1) 00:15:36.352 15.739 - 15.834: 98.9637% ( 1) 00:15:36.352 15.834 - 15.929: 99.0010% ( 5) 00:15:36.352 15.929 - 16.024: 99.0233% ( 3) 00:15:36.352 16.024 - 16.119: 99.0457% ( 3) 00:15:36.352 16.119 - 16.213: 
99.0830% ( 5) 00:15:36.352 16.213 - 16.308: 99.1128% ( 4) 00:15:36.352 16.308 - 16.403: 99.1501% ( 5) 00:15:36.352 16.403 - 16.498: 99.1650% ( 2) 00:15:36.352 16.498 - 16.593: 99.2023% ( 5) 00:15:36.352 16.593 - 16.687: 99.2172% ( 2) 00:15:36.352 16.687 - 16.782: 99.2545% ( 5) 00:15:36.352 16.782 - 16.877: 99.3066% ( 7) 00:15:36.352 16.877 - 16.972: 99.3439% ( 5) 00:15:36.352 16.972 - 17.067: 99.3812% ( 5) 00:15:36.352 17.067 - 17.161: 99.4185% ( 5) 00:15:36.352 17.256 - 17.351: 99.4259% ( 1) 00:15:36.352 17.636 - 17.730: 99.4408% ( 2) 00:15:36.352 18.394 - 18.489: 99.4483% ( 1) 00:15:36.352 18.584 - 18.679: 99.4558% ( 1) 00:15:36.352 19.342 - 19.437: 99.4707% ( 2) 00:15:36.352 20.385 - 20.480: 99.4781% ( 1) 00:15:36.352 3034.074 - 3046.210: 99.4856% ( 1) 00:15:36.352 3980.705 - 4004.978: 99.9404% ( 61) 00:15:36.352 4004.978 - 4029.250: 100.0000% ( 8) 00:15:36.352 00:15:36.352 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:36.352 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:36.352 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:36.352 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:36.352 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:36.918 [ 00:15:36.918 { 00:15:36.918 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:36.918 "subtype": "Discovery", 00:15:36.918 "listen_addresses": [], 00:15:36.918 "allow_any_host": true, 00:15:36.918 "hosts": [] 00:15:36.918 }, 00:15:36.918 { 00:15:36.918 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:36.918 "subtype": "NVMe", 00:15:36.918 "listen_addresses": [ 00:15:36.918 { 00:15:36.918 "trtype": "VFIOUSER", 00:15:36.918 "adrfam": "IPv4", 00:15:36.918 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:36.918 "trsvcid": "0" 00:15:36.918 } 00:15:36.918 ], 00:15:36.918 "allow_any_host": true, 00:15:36.918 "hosts": [], 00:15:36.918 "serial_number": "SPDK1", 00:15:36.918 "model_number": "SPDK bdev Controller", 00:15:36.918 "max_namespaces": 32, 00:15:36.918 "min_cntlid": 1, 00:15:36.918 "max_cntlid": 65519, 00:15:36.918 "namespaces": [ 00:15:36.918 { 00:15:36.918 "nsid": 1, 00:15:36.918 "bdev_name": "Malloc1", 00:15:36.918 "name": "Malloc1", 00:15:36.918 "nguid": "8FCB4F7806D24299BB0AC7F2B289A8A0", 00:15:36.918 "uuid": "8fcb4f78-06d2-4299-bb0a-c7f2b289a8a0" 00:15:36.918 } 00:15:36.918 ] 00:15:36.918 }, 00:15:36.918 { 00:15:36.918 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:36.918 "subtype": "NVMe", 00:15:36.918 "listen_addresses": [ 00:15:36.918 { 00:15:36.918 "trtype": "VFIOUSER", 00:15:36.918 "adrfam": "IPv4", 00:15:36.919 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:36.919 "trsvcid": "0" 00:15:36.919 } 00:15:36.919 ], 00:15:36.919 "allow_any_host": true, 00:15:36.919 "hosts": [], 00:15:36.919 "serial_number": "SPDK2", 00:15:36.919 "model_number": "SPDK bdev Controller", 00:15:36.919 "max_namespaces": 32, 00:15:36.919 "min_cntlid": 1, 00:15:36.919 "max_cntlid": 65519, 00:15:36.919 "namespaces": [ 00:15:36.919 { 00:15:36.919 "nsid": 1, 00:15:36.919 "bdev_name": "Malloc2", 00:15:36.919 "name": "Malloc2", 00:15:36.919 "nguid": "E18D3847876144848DD01469DC3A808B", 00:15:36.919 "uuid": "e18d3847-8761-4484-8dd0-1469dc3a808b" 
00:15:36.919 } 00:15:36.919 ] 00:15:36.919 } 00:15:36.919 ] 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1551583 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:36.919 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.919 [2024-07-14 02:02:42.492439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:36.919 Malloc3 00:15:36.919 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:37.177 [2024-07-14 02:02:42.844992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.177 02:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:37.435 Asynchronous Event Request test 00:15:37.435 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.435 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.435 Registering asynchronous event callbacks... 00:15:37.435 Starting namespace attribute notice tests for all controllers... 00:15:37.435 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:37.435 aer_cb - Changed Namespace 00:15:37.435 Cleaning up... 
00:15:37.435 [ 00:15:37.435 { 00:15:37.435 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:37.435 "subtype": "Discovery", 00:15:37.435 "listen_addresses": [], 00:15:37.435 "allow_any_host": true, 00:15:37.435 "hosts": [] 00:15:37.435 }, 00:15:37.435 { 00:15:37.435 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:37.435 "subtype": "NVMe", 00:15:37.435 "listen_addresses": [ 00:15:37.435 { 00:15:37.435 "trtype": "VFIOUSER", 00:15:37.435 "adrfam": "IPv4", 00:15:37.435 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:37.435 "trsvcid": "0" 00:15:37.435 } 00:15:37.435 ], 00:15:37.435 "allow_any_host": true, 00:15:37.435 "hosts": [], 00:15:37.435 "serial_number": "SPDK1", 00:15:37.435 "model_number": "SPDK bdev Controller", 00:15:37.435 "max_namespaces": 32, 00:15:37.435 "min_cntlid": 1, 00:15:37.435 "max_cntlid": 65519, 00:15:37.435 "namespaces": [ 00:15:37.435 { 00:15:37.436 "nsid": 1, 00:15:37.436 "bdev_name": "Malloc1", 00:15:37.436 "name": "Malloc1", 00:15:37.436 "nguid": "8FCB4F7806D24299BB0AC7F2B289A8A0", 00:15:37.436 "uuid": "8fcb4f78-06d2-4299-bb0a-c7f2b289a8a0" 00:15:37.436 }, 00:15:37.436 { 00:15:37.436 "nsid": 2, 00:15:37.436 "bdev_name": "Malloc3", 00:15:37.436 "name": "Malloc3", 00:15:37.436 "nguid": "902842F9C52745CEA5E83BAAE3ADAA6F", 00:15:37.436 "uuid": "902842f9-c527-45ce-a5e8-3baae3adaa6f" 00:15:37.436 } 00:15:37.436 ] 00:15:37.436 }, 00:15:37.436 { 00:15:37.436 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:37.436 "subtype": "NVMe", 00:15:37.436 "listen_addresses": [ 00:15:37.436 { 00:15:37.436 "trtype": "VFIOUSER", 00:15:37.436 "adrfam": "IPv4", 00:15:37.436 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:37.436 "trsvcid": "0" 00:15:37.436 } 00:15:37.436 ], 00:15:37.436 "allow_any_host": true, 00:15:37.436 "hosts": [], 00:15:37.436 "serial_number": "SPDK2", 00:15:37.436 "model_number": "SPDK bdev Controller", 00:15:37.436 "max_namespaces": 32, 00:15:37.436 "min_cntlid": 1, 00:15:37.436 "max_cntlid": 65519, 00:15:37.436 "namespaces": [ 00:15:37.436 { 00:15:37.436 "nsid": 1, 00:15:37.436 "bdev_name": "Malloc2", 00:15:37.436 "name": "Malloc2", 00:15:37.436 "nguid": "E18D3847876144848DD01469DC3A808B", 00:15:37.436 "uuid": "e18d3847-8761-4484-8dd0-1469dc3a808b" 00:15:37.436 } 00:15:37.436 ] 00:15:37.436 } 00:15:37.436 ] 00:15:37.436 02:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1551583 00:15:37.436 02:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:37.436 02:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:37.436 02:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:37.436 02:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:37.696 [2024-07-14 02:02:43.135879] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:15:37.696 [2024-07-14 02:02:43.135919] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551713 ] 00:15:37.696 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.696 [2024-07-14 02:02:43.168003] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:37.696 [2024-07-14 02:02:43.173339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:37.696 [2024-07-14 02:02:43.173371] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f67236e2000 00:15:37.696 [2024-07-14 02:02:43.174334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:37.696 [2024-07-14 02:02:43.175341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:37.696 [2024-07-14 02:02:43.176352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:37.696 [2024-07-14 02:02:43.177362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:37.696 [2024-07-14 02:02:43.178362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:37.696 [2024-07-14 02:02:43.179366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:37.696 [2024-07-14 02:02:43.180376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:37.696 [2024-07-14 02:02:43.181386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:37.696 [2024-07-14 02:02:43.182402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:37.696 [2024-07-14 02:02:43.182424] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6722496000 00:15:37.696 [2024-07-14 02:02:43.183535] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:37.696 [2024-07-14 02:02:43.198278] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:37.696 [2024-07-14 02:02:43.198312] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:37.696 [2024-07-14 02:02:43.200387] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:37.696 [2024-07-14 02:02:43.200437] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:37.696 [2024-07-14 02:02:43.200524] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:15:37.696 [2024-07-14 02:02:43.200548] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:37.696 [2024-07-14 02:02:43.200558] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:37.696 [2024-07-14 02:02:43.201388] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:37.696 [2024-07-14 02:02:43.201408] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:37.696 [2024-07-14 02:02:43.201420] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:37.696 [2024-07-14 02:02:43.202393] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:37.696 [2024-07-14 02:02:43.202413] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:37.696 [2024-07-14 02:02:43.202426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:37.696 [2024-07-14 02:02:43.203398] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:37.696 [2024-07-14 02:02:43.203418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:37.696 [2024-07-14 02:02:43.205877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:37.696 [2024-07-14 02:02:43.205898] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:37.696 [2024-07-14 02:02:43.205908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:37.696 [2024-07-14 02:02:43.205927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:37.696 [2024-07-14 02:02:43.206038] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:37.696 [2024-07-14 02:02:43.206047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:37.696 [2024-07-14 02:02:43.206056] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:37.696 [2024-07-14 02:02:43.206410] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:37.696 [2024-07-14 02:02:43.207418] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:37.696 [2024-07-14 02:02:43.208426] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:37.696 [2024-07-14 02:02:43.209419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.696 [2024-07-14 02:02:43.209498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:37.696 [2024-07-14 02:02:43.210440] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:37.696 [2024-07-14 02:02:43.210460] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:37.696 [2024-07-14 02:02:43.210469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:37.696 [2024-07-14 02:02:43.210492] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:37.696 [2024-07-14 02:02:43.210506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:37.696 [2024-07-14 02:02:43.210529] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:37.696 [2024-07-14 02:02:43.210540] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:37.696 [2024-07-14 02:02:43.210560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:37.696 [2024-07-14 02:02:43.216884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:37.696 [2024-07-14 02:02:43.216909] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:37.696 [2024-07-14 02:02:43.216922] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:37.696 [2024-07-14 02:02:43.216930] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:37.696 [2024-07-14 02:02:43.216938] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:37.696 [2024-07-14 02:02:43.216946] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:37.696 [2024-07-14 02:02:43.216954] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:37.696 [2024-07-14 02:02:43.216962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:37.696 [2024-07-14 02:02:43.216979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:37.696 [2024-07-14 02:02:43.216995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:15:37.696 [2024-07-14 02:02:43.224878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:37.696 [2024-07-14 02:02:43.224906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.696 [2024-07-14 02:02:43.224921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.696 [2024-07-14 02:02:43.224933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.696 [2024-07-14 02:02:43.224945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.696 [2024-07-14 02:02:43.224954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:37.696 [2024-07-14 02:02:43.224969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:37.696 [2024-07-14 02:02:43.224984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.232875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.232893] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:37.697 [2024-07-14 02:02:43.232903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.232915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.232925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.232939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.240892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.240962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.240977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.240990] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:37.697 [2024-07-14 02:02:43.240998] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:37.697 [2024-07-14 02:02:43.241008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.248877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.248900] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:37.697 [2024-07-14 02:02:43.248936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.248955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.248969] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:37.697 [2024-07-14 02:02:43.248978] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:37.697 [2024-07-14 02:02:43.248988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.256876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.256903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.256919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.256931] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:37.697 [2024-07-14 02:02:43.256940] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:37.697 [2024-07-14 02:02:43.256949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.264877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.264898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.264911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.264925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.264935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.264944] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.264952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:37.697 
[2024-07-14 02:02:43.264961] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:37.697 [2024-07-14 02:02:43.264969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:37.697 [2024-07-14 02:02:43.264977] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:37.697 [2024-07-14 02:02:43.265002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.272879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.272905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.280876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.280901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.288878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.288902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.296878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.296911] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:37.697 [2024-07-14 02:02:43.296922] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:37.697 [2024-07-14 02:02:43.296928] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:37.697 [2024-07-14 02:02:43.296934] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:37.697 [2024-07-14 02:02:43.296944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:37.697 [2024-07-14 02:02:43.296955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:37.697 [2024-07-14 02:02:43.296963] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:37.697 [2024-07-14 02:02:43.296972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.296983] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:37.697 [2024-07-14 02:02:43.296991] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:37.697 [2024-07-14 02:02:43.297000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:15:37.697 [2024-07-14 02:02:43.297011] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:37.697 [2024-07-14 02:02:43.297019] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:37.697 [2024-07-14 02:02:43.297028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:37.697 [2024-07-14 02:02:43.304878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.304905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.304922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:37.697 [2024-07-14 02:02:43.304934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:37.697 ===================================================== 00:15:37.697 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:37.697 ===================================================== 00:15:37.697 Controller Capabilities/Features 00:15:37.697 ================================ 00:15:37.697 Vendor ID: 4e58 00:15:37.697 Subsystem Vendor ID: 4e58 00:15:37.697 Serial Number: SPDK2 00:15:37.697 Model Number: SPDK bdev Controller 00:15:37.697 Firmware Version: 24.09 00:15:37.697 Recommended Arb Burst: 6 00:15:37.697 IEEE OUI Identifier: 8d 6b 50 00:15:37.697 Multi-path I/O 00:15:37.697 May have multiple subsystem ports: Yes 00:15:37.697 May have multiple controllers: Yes 00:15:37.697 Associated with SR-IOV VF: No 00:15:37.697 Max Data Transfer Size: 131072 00:15:37.697 Max Number of Namespaces: 32 00:15:37.697 Max Number of I/O Queues: 127 00:15:37.697 NVMe Specification Version (VS): 1.3 00:15:37.697 NVMe Specification Version (Identify): 1.3 00:15:37.697 Maximum Queue Entries: 256 00:15:37.697 Contiguous Queues Required: Yes 00:15:37.697 Arbitration Mechanisms Supported 00:15:37.697 Weighted Round Robin: Not Supported 00:15:37.697 Vendor Specific: Not Supported 00:15:37.697 Reset Timeout: 15000 ms 00:15:37.697 Doorbell Stride: 4 bytes 00:15:37.697 NVM Subsystem Reset: Not Supported 00:15:37.697 Command Sets Supported 00:15:37.697 NVM Command Set: Supported 00:15:37.697 Boot Partition: Not Supported 00:15:37.697 Memory Page Size Minimum: 4096 bytes 00:15:37.697 Memory Page Size Maximum: 4096 bytes 00:15:37.697 Persistent Memory Region: Not Supported 00:15:37.697 Optional Asynchronous Events Supported 00:15:37.697 Namespace Attribute Notices: Supported 00:15:37.697 Firmware Activation Notices: Not Supported 00:15:37.697 ANA Change Notices: Not Supported 00:15:37.697 PLE Aggregate Log Change Notices: Not Supported 00:15:37.697 LBA Status Info Alert Notices: Not Supported 00:15:37.697 EGE Aggregate Log Change Notices: Not Supported 00:15:37.698 Normal NVM Subsystem Shutdown event: Not Supported 00:15:37.698 Zone Descriptor Change Notices: Not Supported 00:15:37.698 Discovery Log Change Notices: Not Supported 00:15:37.698 Controller Attributes 00:15:37.698 128-bit Host Identifier: Supported 00:15:37.698 Non-Operational Permissive Mode: Not Supported 00:15:37.698 NVM Sets: Not Supported 00:15:37.698 Read Recovery Levels: Not Supported 
00:15:37.698 Endurance Groups: Not Supported 00:15:37.698 Predictable Latency Mode: Not Supported 00:15:37.698 Traffic Based Keep ALive: Not Supported 00:15:37.698 Namespace Granularity: Not Supported 00:15:37.698 SQ Associations: Not Supported 00:15:37.698 UUID List: Not Supported 00:15:37.698 Multi-Domain Subsystem: Not Supported 00:15:37.698 Fixed Capacity Management: Not Supported 00:15:37.698 Variable Capacity Management: Not Supported 00:15:37.698 Delete Endurance Group: Not Supported 00:15:37.698 Delete NVM Set: Not Supported 00:15:37.698 Extended LBA Formats Supported: Not Supported 00:15:37.698 Flexible Data Placement Supported: Not Supported 00:15:37.698 00:15:37.698 Controller Memory Buffer Support 00:15:37.698 ================================ 00:15:37.698 Supported: No 00:15:37.698 00:15:37.698 Persistent Memory Region Support 00:15:37.698 ================================ 00:15:37.698 Supported: No 00:15:37.698 00:15:37.698 Admin Command Set Attributes 00:15:37.698 ============================ 00:15:37.698 Security Send/Receive: Not Supported 00:15:37.698 Format NVM: Not Supported 00:15:37.698 Firmware Activate/Download: Not Supported 00:15:37.698 Namespace Management: Not Supported 00:15:37.698 Device Self-Test: Not Supported 00:15:37.698 Directives: Not Supported 00:15:37.698 NVMe-MI: Not Supported 00:15:37.698 Virtualization Management: Not Supported 00:15:37.698 Doorbell Buffer Config: Not Supported 00:15:37.698 Get LBA Status Capability: Not Supported 00:15:37.698 Command & Feature Lockdown Capability: Not Supported 00:15:37.698 Abort Command Limit: 4 00:15:37.698 Async Event Request Limit: 4 00:15:37.698 Number of Firmware Slots: N/A 00:15:37.698 Firmware Slot 1 Read-Only: N/A 00:15:37.698 Firmware Activation Without Reset: N/A 00:15:37.698 Multiple Update Detection Support: N/A 00:15:37.698 Firmware Update Granularity: No Information Provided 00:15:37.698 Per-Namespace SMART Log: No 00:15:37.698 Asymmetric Namespace Access Log Page: Not Supported 00:15:37.698 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:37.698 Command Effects Log Page: Supported 00:15:37.698 Get Log Page Extended Data: Supported 00:15:37.698 Telemetry Log Pages: Not Supported 00:15:37.698 Persistent Event Log Pages: Not Supported 00:15:37.698 Supported Log Pages Log Page: May Support 00:15:37.698 Commands Supported & Effects Log Page: Not Supported 00:15:37.698 Feature Identifiers & Effects Log Page:May Support 00:15:37.698 NVMe-MI Commands & Effects Log Page: May Support 00:15:37.698 Data Area 4 for Telemetry Log: Not Supported 00:15:37.698 Error Log Page Entries Supported: 128 00:15:37.698 Keep Alive: Supported 00:15:37.698 Keep Alive Granularity: 10000 ms 00:15:37.698 00:15:37.698 NVM Command Set Attributes 00:15:37.698 ========================== 00:15:37.698 Submission Queue Entry Size 00:15:37.698 Max: 64 00:15:37.698 Min: 64 00:15:37.698 Completion Queue Entry Size 00:15:37.698 Max: 16 00:15:37.698 Min: 16 00:15:37.698 Number of Namespaces: 32 00:15:37.698 Compare Command: Supported 00:15:37.698 Write Uncorrectable Command: Not Supported 00:15:37.698 Dataset Management Command: Supported 00:15:37.698 Write Zeroes Command: Supported 00:15:37.698 Set Features Save Field: Not Supported 00:15:37.698 Reservations: Not Supported 00:15:37.698 Timestamp: Not Supported 00:15:37.698 Copy: Supported 00:15:37.698 Volatile Write Cache: Present 00:15:37.698 Atomic Write Unit (Normal): 1 00:15:37.698 Atomic Write Unit (PFail): 1 00:15:37.698 Atomic Compare & Write Unit: 1 00:15:37.698 Fused Compare & Write: 
Supported 00:15:37.698 Scatter-Gather List 00:15:37.698 SGL Command Set: Supported (Dword aligned) 00:15:37.698 SGL Keyed: Not Supported 00:15:37.698 SGL Bit Bucket Descriptor: Not Supported 00:15:37.698 SGL Metadata Pointer: Not Supported 00:15:37.698 Oversized SGL: Not Supported 00:15:37.698 SGL Metadata Address: Not Supported 00:15:37.698 SGL Offset: Not Supported 00:15:37.698 Transport SGL Data Block: Not Supported 00:15:37.698 Replay Protected Memory Block: Not Supported 00:15:37.698 00:15:37.698 Firmware Slot Information 00:15:37.698 ========================= 00:15:37.698 Active slot: 1 00:15:37.698 Slot 1 Firmware Revision: 24.09 00:15:37.698 00:15:37.698 00:15:37.698 Commands Supported and Effects 00:15:37.698 ============================== 00:15:37.698 Admin Commands 00:15:37.698 -------------- 00:15:37.698 Get Log Page (02h): Supported 00:15:37.698 Identify (06h): Supported 00:15:37.698 Abort (08h): Supported 00:15:37.698 Set Features (09h): Supported 00:15:37.698 Get Features (0Ah): Supported 00:15:37.698 Asynchronous Event Request (0Ch): Supported 00:15:37.698 Keep Alive (18h): Supported 00:15:37.698 I/O Commands 00:15:37.698 ------------ 00:15:37.698 Flush (00h): Supported LBA-Change 00:15:37.698 Write (01h): Supported LBA-Change 00:15:37.698 Read (02h): Supported 00:15:37.698 Compare (05h): Supported 00:15:37.698 Write Zeroes (08h): Supported LBA-Change 00:15:37.698 Dataset Management (09h): Supported LBA-Change 00:15:37.698 Copy (19h): Supported LBA-Change 00:15:37.698 00:15:37.698 Error Log 00:15:37.698 ========= 00:15:37.698 00:15:37.698 Arbitration 00:15:37.698 =========== 00:15:37.698 Arbitration Burst: 1 00:15:37.698 00:15:37.698 Power Management 00:15:37.698 ================ 00:15:37.698 Number of Power States: 1 00:15:37.698 Current Power State: Power State #0 00:15:37.698 Power State #0: 00:15:37.698 Max Power: 0.00 W 00:15:37.698 Non-Operational State: Operational 00:15:37.698 Entry Latency: Not Reported 00:15:37.698 Exit Latency: Not Reported 00:15:37.698 Relative Read Throughput: 0 00:15:37.698 Relative Read Latency: 0 00:15:37.698 Relative Write Throughput: 0 00:15:37.698 Relative Write Latency: 0 00:15:37.698 Idle Power: Not Reported 00:15:37.698 Active Power: Not Reported 00:15:37.698 Non-Operational Permissive Mode: Not Supported 00:15:37.698 00:15:37.698 Health Information 00:15:37.698 ================== 00:15:37.698 Critical Warnings: 00:15:37.698 Available Spare Space: OK 00:15:37.698 Temperature: OK 00:15:37.698 Device Reliability: OK 00:15:37.698 Read Only: No 00:15:37.698 Volatile Memory Backup: OK 00:15:37.698 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:37.698 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:37.698 Available Spare: 0% 00:15:37.698 Available Sp[2024-07-14 02:02:43.305046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:37.698 [2024-07-14 02:02:43.312890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:37.698 [2024-07-14 02:02:43.312942] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:37.698 [2024-07-14 02:02:43.312960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.698 [2024-07-14 02:02:43.312972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.698 [2024-07-14 02:02:43.312981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.698 [2024-07-14 02:02:43.312991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.698 [2024-07-14 02:02:43.313059] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:37.698 [2024-07-14 02:02:43.313080] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:37.698 [2024-07-14 02:02:43.314058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.698 [2024-07-14 02:02:43.314132] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:37.698 [2024-07-14 02:02:43.314148] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:37.698 [2024-07-14 02:02:43.315067] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:37.698 [2024-07-14 02:02:43.315092] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:37.699 [2024-07-14 02:02:43.315145] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:37.699 [2024-07-14 02:02:43.317877] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:37.699 are Threshold: 0% 00:15:37.699 Life Percentage Used: 0% 00:15:37.699 Data Units Read: 0 00:15:37.699 Data Units Written: 0 00:15:37.699 Host Read Commands: 0 00:15:37.699 Host Write Commands: 0 00:15:37.699 Controller Busy Time: 0 minutes 00:15:37.699 Power Cycles: 0 00:15:37.699 Power On Hours: 0 hours 00:15:37.699 Unsafe Shutdowns: 0 00:15:37.699 Unrecoverable Media Errors: 0 00:15:37.699 Lifetime Error Log Entries: 0 00:15:37.699 Warning Temperature Time: 0 minutes 00:15:37.699 Critical Temperature Time: 0 minutes 00:15:37.699 00:15:37.699 Number of Queues 00:15:37.699 ================ 00:15:37.699 Number of I/O Submission Queues: 127 00:15:37.699 Number of I/O Completion Queues: 127 00:15:37.699 00:15:37.699 Active Namespaces 00:15:37.699 ================= 00:15:37.699 Namespace ID:1 00:15:37.699 Error Recovery Timeout: Unlimited 00:15:37.699 Command Set Identifier: NVM (00h) 00:15:37.699 Deallocate: Supported 00:15:37.699 Deallocated/Unwritten Error: Not Supported 00:15:37.699 Deallocated Read Value: Unknown 00:15:37.699 Deallocate in Write Zeroes: Not Supported 00:15:37.699 Deallocated Guard Field: 0xFFFF 00:15:37.699 Flush: Supported 00:15:37.699 Reservation: Supported 00:15:37.699 Namespace Sharing Capabilities: Multiple Controllers 00:15:37.699 Size (in LBAs): 131072 (0GiB) 00:15:37.699 Capacity (in LBAs): 131072 (0GiB) 00:15:37.699 Utilization (in LBAs): 131072 (0GiB) 00:15:37.699 NGUID: E18D3847876144848DD01469DC3A808B 00:15:37.699 UUID: e18d3847-8761-4484-8dd0-1469dc3a808b 00:15:37.699 Thin Provisioning: Not Supported 00:15:37.699 Per-NS Atomic Units: Yes 00:15:37.699 Atomic Boundary Size (Normal): 0 00:15:37.699 Atomic Boundary Size 
(PFail): 0 00:15:37.699 Atomic Boundary Offset: 0 00:15:37.699 Maximum Single Source Range Length: 65535 00:15:37.699 Maximum Copy Length: 65535 00:15:37.699 Maximum Source Range Count: 1 00:15:37.699 NGUID/EUI64 Never Reused: No 00:15:37.699 Namespace Write Protected: No 00:15:37.699 Number of LBA Formats: 1 00:15:37.699 Current LBA Format: LBA Format #00 00:15:37.699 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:37.699 00:15:37.699 02:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:37.958 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.958 [2024-07-14 02:02:43.547622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.285 Initializing NVMe Controllers 00:15:43.285 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:43.285 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:43.285 Initialization complete. Launching workers. 00:15:43.285 ======================================================== 00:15:43.285 Latency(us) 00:15:43.285 Device Information : IOPS MiB/s Average min max 00:15:43.285 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35228.15 137.61 3632.85 1150.47 7342.21 00:15:43.285 ======================================================== 00:15:43.285 Total : 35228.15 137.61 3632.85 1150.47 7342.21 00:15:43.285 00:15:43.285 [2024-07-14 02:02:48.653243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.285 02:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:43.285 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.285 [2024-07-14 02:02:48.887917] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:48.562 Initializing NVMe Controllers 00:15:48.562 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:48.562 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:48.562 Initialization complete. Launching workers. 
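As an illustration (not part of the captured output): the MiB/s column in the spdk_nvme_perf tables in this run follows directly from the IOPS column and the 4096-byte I/O size passed with -o 4096. A tiny check, with the read-run figure hard-coded for demonstration:

#include <stdio.h>

/* Reproduce the MiB/s column of spdk_nvme_perf from IOPS and -o <io_size>. */
int main(void)
{
        double iops = 35228.15;               /* read run above */
        double io_size = 4096.0;              /* -o 4096 */
        double mib_s = iops * io_size / (1024.0 * 1024.0);

        printf("%.2f IOPS at %.0f B = %.2f MiB/s\n", iops, io_size, mib_s);
        /* prints 137.61 MiB/s, matching the table above */
        return 0;
}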
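Stepping back to the initialization trace earlier in this section (again an illustration, not captured output): the sequence recorded there, ASQ written at offset 0x28, ACQ at 0x30, AQA at 0x24, CC set to 0x460001, then CSTS at offset 0x1c polled until it reads 1, is the standard NVMe controller-enable handshake. A minimal spec-level sketch over a mapped register window follows; timeouts, memory barriers and the remaining CC fields (CSS, MPS, IOSQES/IOCQES, which account for the 0x46 in 0x460001) are omitted or simplified, so treat it as a sketch rather than a driver:

#include <stdint.h>
#include <stdbool.h>

/* NVMe controller registers, offsets per the NVMe spec and matching the
 * offsets seen in the log (CC=0x14, CSTS=0x1c, AQA=0x24, ASQ=0x28, ACQ=0x30). */
struct nvme_regs {
        volatile uint64_t cap;    /* 0x00 */
        volatile uint32_t vs;     /* 0x08, 0x10300 above = NVMe 1.3 */
        volatile uint32_t intms;  /* 0x0c */
        volatile uint32_t intmc;  /* 0x10 */
        volatile uint32_t cc;     /* 0x14 */
        volatile uint32_t rsvd;   /* 0x18 */
        volatile uint32_t csts;   /* 0x1c */
        volatile uint32_t nssr;   /* 0x20 */
        volatile uint32_t aqa;    /* 0x24 */
        volatile uint64_t asq;    /* 0x28 */
        volatile uint64_t acq;    /* 0x30 */
};

#define NVME_CC_EN    (1u << 0)
#define NVME_CSTS_RDY (1u << 0)

/* Program the admin queue and run the CC.EN=1 / CSTS.RDY=1 handshake. */
static bool enable_controller(struct nvme_regs *r,
                              uint64_t asq_pa, uint64_t acq_pa,
                              uint32_t queue_entries)
{
        /* Wait for any previous enable to clear (log: CC.EN = 0 && CSTS.RDY = 0). */
        while (r->csts & NVME_CSTS_RDY) { }

        r->asq = asq_pa;   /* admin submission queue base */
        r->acq = acq_pa;   /* admin completion queue base */
        r->aqa = ((queue_entries - 1) << 16) | (queue_entries - 1); /* 0xff00ff above = 256 entries */

        r->cc |= NVME_CC_EN;   /* enable; full value in the log is 0x460001 */

        while (!(r->csts & NVME_CSTS_RDY)) { }  /* poll until CSTS.RDY = 1 */
        return true;
}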
00:15:48.562 ======================================================== 00:15:48.562 Latency(us) 00:15:48.562 Device Information : IOPS MiB/s Average min max 00:15:48.562 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31869.49 124.49 4016.43 1190.14 11542.51 00:15:48.562 ======================================================== 00:15:48.562 Total : 31869.49 124.49 4016.43 1190.14 11542.51 00:15:48.562 00:15:48.562 [2024-07-14 02:02:53.914682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:48.562 02:02:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:48.562 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.562 [2024-07-14 02:02:54.132636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:53.835 [2024-07-14 02:02:59.267036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:53.835 Initializing NVMe Controllers 00:15:53.835 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:53.835 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:53.835 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:53.835 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:53.835 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:53.835 Initialization complete. Launching workers. 00:15:53.835 Starting thread on core 2 00:15:53.835 Starting thread on core 3 00:15:53.835 Starting thread on core 1 00:15:53.835 02:02:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:53.835 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.094 [2024-07-14 02:02:59.578366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.381 [2024-07-14 02:03:02.644551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.381 Initializing NVMe Controllers 00:15:57.381 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.381 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.381 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:57.381 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:57.381 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:57.381 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:57.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:57.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:57.381 Initialization complete. Launching workers. 
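Every tool invoked in this run (spdk_nvme_perf, reconnect, arbitration, hello_world, overhead, aer) attaches to the target through the same vfio-user transport ID string passed via -r. As an illustration (not captured output), this is roughly how such a string becomes a controller handle with the public SPDK API; the option fields and the spdk_env_init details may differ between SPDK versions, so treat it as a sketch:

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr_opts ctrlr_opts;
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "vfio_user_connect_demo";   /* illustrative name */
        if (spdk_env_init(&env_opts) < 0) {
                fprintf(stderr, "spdk_env_init failed\n");
                return 1;
        }

        /* Same form as the -r argument used by the tools above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 "
                "subnqn:nqn.2019-07.io.spdk:cnode2") != 0) {
                fprintf(stderr, "failed to parse transport ID\n");
                return 1;
        }

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
        ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
        if (ctrlr == NULL) {
                fprintf(stderr, "spdk_nvme_connect failed\n");
                return 1;
        }

        /* ... enumerate namespaces and issue I/O, then detach ... */
        spdk_nvme_detach(ctrlr);
        return 0;
}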
00:15:57.381 Starting thread on core 1 with urgent priority queue 00:15:57.381 Starting thread on core 2 with urgent priority queue 00:15:57.381 Starting thread on core 3 with urgent priority queue 00:15:57.381 Starting thread on core 0 with urgent priority queue 00:15:57.381 SPDK bdev Controller (SPDK2 ) core 0: 5581.33 IO/s 17.92 secs/100000 ios 00:15:57.381 SPDK bdev Controller (SPDK2 ) core 1: 5548.67 IO/s 18.02 secs/100000 ios 00:15:57.381 SPDK bdev Controller (SPDK2 ) core 2: 5810.00 IO/s 17.21 secs/100000 ios 00:15:57.381 SPDK bdev Controller (SPDK2 ) core 3: 6223.67 IO/s 16.07 secs/100000 ios 00:15:57.381 ======================================================== 00:15:57.381 00:15:57.381 02:03:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:57.381 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.381 [2024-07-14 02:03:02.942420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.381 Initializing NVMe Controllers 00:15:57.381 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.381 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.381 Namespace ID: 1 size: 0GB 00:15:57.381 Initialization complete. 00:15:57.381 INFO: using host memory buffer for IO 00:15:57.381 Hello world! 00:15:57.381 [2024-07-14 02:03:02.951605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.381 02:03:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:57.381 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.654 [2024-07-14 02:03:03.248604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:59.032 Initializing NVMe Controllers 00:15:59.032 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.032 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.032 Initialization complete. Launching workers. 
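The hello_world pass above ("using host memory buffer for IO ... Hello world!") is the plain SPDK I/O path: allocate a DMA-safe buffer, write it to the namespace, read it back, and poll the qpair for completions. A condensed illustration of that pattern (not captured output); it assumes ctrlr and ns were obtained as in the connect sketch earlier, error handling is omitted, and io_done is just an illustrative name:

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
        *(bool *)arg = true;   /* mark the request finished */
}

/* Write one block of text to LBA 0 and read it back. */
static void hello_io(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
        struct spdk_nvme_qpair *qpair =
                spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
        char *buf = spdk_zmalloc(sz, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                                 SPDK_MALLOC_DMA);
        bool done = false;

        snprintf(buf, sz, "Hello world!");
        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* blocks */,
                               io_done, &done, 0);
        while (!done) {
                spdk_nvme_qpair_process_completions(qpair, 0);
        }

        done = false;
        memset(buf, 0, sz);
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_done, &done, 0);
        while (!done) {
                spdk_nvme_qpair_process_completions(qpair, 0);
        }

        printf("%s\n", buf);   /* expected: Hello world! */
        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qpair);
}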
00:15:59.032 submit (in ns) avg, min, max = 8112.2, 3494.4, 4018263.3 00:15:59.032 complete (in ns) avg, min, max = 22847.2, 2048.9, 6012271.1 00:15:59.032 00:15:59.032 Submit histogram 00:15:59.032 ================ 00:15:59.032 Range in us Cumulative Count 00:15:59.032 3.484 - 3.508: 0.3136% ( 42) 00:15:59.032 3.508 - 3.532: 1.0826% ( 103) 00:15:59.032 3.532 - 3.556: 3.7330% ( 355) 00:15:59.032 3.556 - 3.579: 8.3396% ( 617) 00:15:59.032 3.579 - 3.603: 16.5895% ( 1105) 00:15:59.032 3.603 - 3.627: 25.3472% ( 1173) 00:15:59.032 3.627 - 3.650: 34.7544% ( 1260) 00:15:59.032 3.650 - 3.674: 41.7426% ( 936) 00:15:59.032 3.674 - 3.698: 47.9916% ( 837) 00:15:59.032 3.698 - 3.721: 53.8077% ( 779) 00:15:59.032 3.721 - 3.745: 58.5187% ( 631) 00:15:59.032 3.745 - 3.769: 62.4608% ( 528) 00:15:59.032 3.769 - 3.793: 66.0594% ( 482) 00:15:59.032 3.793 - 3.816: 69.4490% ( 454) 00:15:59.032 3.816 - 3.840: 73.0626% ( 484) 00:15:59.032 3.840 - 3.864: 76.8628% ( 509) 00:15:59.032 3.864 - 3.887: 80.4987% ( 487) 00:15:59.032 3.887 - 3.911: 83.4777% ( 399) 00:15:59.032 3.911 - 3.935: 85.7847% ( 309) 00:15:59.032 3.935 - 3.959: 87.7781% ( 267) 00:15:59.032 3.959 - 3.982: 89.3534% ( 211) 00:15:59.032 3.982 - 4.006: 90.9064% ( 208) 00:15:59.032 4.006 - 4.030: 91.9964% ( 146) 00:15:59.032 4.030 - 4.053: 93.2208% ( 164) 00:15:59.032 4.053 - 4.077: 94.0346% ( 109) 00:15:59.032 4.077 - 4.101: 94.6917% ( 88) 00:15:59.032 4.101 - 4.124: 95.1695% ( 64) 00:15:59.032 4.124 - 4.148: 95.5502% ( 51) 00:15:59.032 4.148 - 4.172: 95.8788% ( 44) 00:15:59.032 4.172 - 4.196: 96.0579% ( 24) 00:15:59.032 4.196 - 4.219: 96.1998% ( 19) 00:15:59.032 4.219 - 4.243: 96.3342% ( 18) 00:15:59.032 4.243 - 4.267: 96.4686% ( 18) 00:15:59.032 4.267 - 4.290: 96.6030% ( 18) 00:15:59.032 4.290 - 4.314: 96.6776% ( 10) 00:15:59.032 4.314 - 4.338: 96.7373% ( 8) 00:15:59.032 4.338 - 4.361: 96.7896% ( 7) 00:15:59.032 4.361 - 4.385: 96.8568% ( 9) 00:15:59.032 4.385 - 4.409: 96.9091% ( 7) 00:15:59.032 4.409 - 4.433: 96.9539% ( 6) 00:15:59.032 4.433 - 4.456: 96.9688% ( 2) 00:15:59.032 4.456 - 4.480: 96.9987% ( 4) 00:15:59.032 4.480 - 4.504: 97.0285% ( 4) 00:15:59.032 4.504 - 4.527: 97.0360% ( 1) 00:15:59.033 4.551 - 4.575: 97.0435% ( 1) 00:15:59.033 4.599 - 4.622: 97.0808% ( 5) 00:15:59.033 4.622 - 4.646: 97.0882% ( 1) 00:15:59.033 4.646 - 4.670: 97.1181% ( 4) 00:15:59.033 4.670 - 4.693: 97.1480% ( 4) 00:15:59.033 4.693 - 4.717: 97.1853% ( 5) 00:15:59.033 4.717 - 4.741: 97.2376% ( 7) 00:15:59.033 4.741 - 4.764: 97.2525% ( 2) 00:15:59.033 4.764 - 4.788: 97.2824% ( 4) 00:15:59.033 4.788 - 4.812: 97.3496% ( 9) 00:15:59.033 4.812 - 4.836: 97.4018% ( 7) 00:15:59.033 4.836 - 4.859: 97.4466% ( 6) 00:15:59.033 4.859 - 4.883: 97.4839% ( 5) 00:15:59.033 4.883 - 4.907: 97.5138% ( 4) 00:15:59.033 4.907 - 4.930: 97.5437% ( 4) 00:15:59.033 4.930 - 4.954: 97.6034% ( 8) 00:15:59.033 4.954 - 4.978: 97.6855% ( 11) 00:15:59.033 4.978 - 5.001: 97.7677% ( 11) 00:15:59.033 5.001 - 5.025: 97.8274% ( 8) 00:15:59.033 5.025 - 5.049: 97.8722% ( 6) 00:15:59.033 5.049 - 5.073: 97.8796% ( 1) 00:15:59.033 5.073 - 5.096: 97.9170% ( 5) 00:15:59.033 5.096 - 5.120: 97.9319% ( 2) 00:15:59.033 5.120 - 5.144: 97.9394% ( 1) 00:15:59.033 5.144 - 5.167: 97.9543% ( 2) 00:15:59.033 5.191 - 5.215: 97.9692% ( 2) 00:15:59.033 5.215 - 5.239: 97.9842% ( 2) 00:15:59.033 5.239 - 5.262: 98.0066% ( 3) 00:15:59.033 5.262 - 5.286: 98.0215% ( 2) 00:15:59.033 5.286 - 5.310: 98.0364% ( 2) 00:15:59.033 5.333 - 5.357: 98.0439% ( 1) 00:15:59.033 5.357 - 5.381: 98.0514% ( 1) 00:15:59.033 5.404 - 5.428: 98.0663% ( 2) 
00:15:59.033 5.428 - 5.452: 98.0812% ( 2) 00:15:59.033 5.452 - 5.476: 98.0962% ( 2) 00:15:59.033 5.476 - 5.499: 98.1111% ( 2) 00:15:59.033 5.547 - 5.570: 98.1260% ( 2) 00:15:59.033 5.570 - 5.594: 98.1484% ( 3) 00:15:59.033 5.618 - 5.641: 98.1559% ( 1) 00:15:59.033 5.641 - 5.665: 98.1634% ( 1) 00:15:59.033 5.665 - 5.689: 98.1708% ( 1) 00:15:59.033 5.689 - 5.713: 98.1783% ( 1) 00:15:59.033 5.713 - 5.736: 98.1858% ( 1) 00:15:59.033 5.736 - 5.760: 98.2007% ( 2) 00:15:59.033 5.760 - 5.784: 98.2156% ( 2) 00:15:59.033 5.784 - 5.807: 98.2231% ( 1) 00:15:59.033 5.807 - 5.831: 98.2380% ( 2) 00:15:59.033 5.879 - 5.902: 98.2455% ( 1) 00:15:59.033 5.950 - 5.973: 98.2529% ( 1) 00:15:59.033 5.997 - 6.021: 98.2604% ( 1) 00:15:59.033 6.068 - 6.116: 98.2679% ( 1) 00:15:59.033 6.116 - 6.163: 98.2828% ( 2) 00:15:59.033 6.258 - 6.305: 98.2977% ( 2) 00:15:59.033 6.305 - 6.353: 98.3127% ( 2) 00:15:59.033 6.779 - 6.827: 98.3201% ( 1) 00:15:59.033 6.874 - 6.921: 98.3276% ( 1) 00:15:59.033 6.921 - 6.969: 98.3351% ( 1) 00:15:59.033 6.969 - 7.016: 98.3500% ( 2) 00:15:59.033 7.064 - 7.111: 98.3649% ( 2) 00:15:59.033 7.111 - 7.159: 98.3724% ( 1) 00:15:59.033 7.301 - 7.348: 98.3799% ( 1) 00:15:59.033 7.348 - 7.396: 98.4023% ( 3) 00:15:59.033 7.396 - 7.443: 98.4172% ( 2) 00:15:59.033 7.490 - 7.538: 98.4247% ( 1) 00:15:59.033 7.585 - 7.633: 98.4396% ( 2) 00:15:59.033 7.633 - 7.680: 98.4471% ( 1) 00:15:59.033 7.680 - 7.727: 98.4620% ( 2) 00:15:59.033 7.727 - 7.775: 98.4769% ( 2) 00:15:59.033 7.775 - 7.822: 98.5068% ( 4) 00:15:59.033 7.822 - 7.870: 98.5143% ( 1) 00:15:59.033 7.870 - 7.917: 98.5367% ( 3) 00:15:59.033 7.964 - 8.012: 98.5441% ( 1) 00:15:59.033 8.012 - 8.059: 98.5516% ( 1) 00:15:59.033 8.154 - 8.201: 98.5815% ( 4) 00:15:59.033 8.201 - 8.249: 98.5889% ( 1) 00:15:59.033 8.249 - 8.296: 98.5964% ( 1) 00:15:59.033 8.296 - 8.344: 98.6039% ( 1) 00:15:59.033 8.486 - 8.533: 98.6263% ( 3) 00:15:59.033 8.723 - 8.770: 98.6486% ( 3) 00:15:59.033 8.865 - 8.913: 98.6636% ( 2) 00:15:59.033 8.960 - 9.007: 98.6710% ( 1) 00:15:59.033 9.150 - 9.197: 98.6860% ( 2) 00:15:59.033 9.244 - 9.292: 98.6934% ( 1) 00:15:59.033 9.292 - 9.339: 98.7009% ( 1) 00:15:59.033 9.387 - 9.434: 98.7084% ( 1) 00:15:59.033 9.481 - 9.529: 98.7158% ( 1) 00:15:59.033 9.766 - 9.813: 98.7233% ( 1) 00:15:59.033 10.145 - 10.193: 98.7308% ( 1) 00:15:59.033 10.335 - 10.382: 98.7382% ( 1) 00:15:59.033 10.382 - 10.430: 98.7532% ( 2) 00:15:59.033 10.667 - 10.714: 98.7606% ( 1) 00:15:59.033 10.761 - 10.809: 98.7681% ( 1) 00:15:59.033 10.856 - 10.904: 98.7756% ( 1) 00:15:59.033 10.999 - 11.046: 98.7830% ( 1) 00:15:59.033 11.188 - 11.236: 98.7905% ( 1) 00:15:59.033 11.236 - 11.283: 98.7980% ( 1) 00:15:59.033 11.283 - 11.330: 98.8054% ( 1) 00:15:59.033 11.330 - 11.378: 98.8204% ( 2) 00:15:59.033 11.378 - 11.425: 98.8353% ( 2) 00:15:59.033 11.520 - 11.567: 98.8502% ( 2) 00:15:59.033 11.615 - 11.662: 98.8577% ( 1) 00:15:59.033 11.662 - 11.710: 98.8652% ( 1) 00:15:59.033 11.710 - 11.757: 98.8726% ( 1) 00:15:59.033 12.041 - 12.089: 98.8801% ( 1) 00:15:59.033 12.136 - 12.231: 98.8876% ( 1) 00:15:59.033 12.231 - 12.326: 98.8950% ( 1) 00:15:59.033 12.610 - 12.705: 98.9025% ( 1) 00:15:59.033 12.895 - 12.990: 98.9174% ( 2) 00:15:59.033 13.179 - 13.274: 98.9249% ( 1) 00:15:59.033 13.274 - 13.369: 98.9324% ( 1) 00:15:59.033 13.369 - 13.464: 98.9398% ( 1) 00:15:59.033 13.464 - 13.559: 98.9548% ( 2) 00:15:59.033 13.559 - 13.653: 98.9622% ( 1) 00:15:59.033 14.033 - 14.127: 98.9772% ( 2) 00:15:59.033 14.127 - 14.222: 98.9846% ( 1) 00:15:59.033 14.222 - 14.317: 98.9921% ( 1) 
00:15:59.033 14.507 - 14.601: 98.9996% ( 1) 00:15:59.033 15.170 - 15.265: 99.0145% ( 2) 00:15:59.033 15.455 - 15.550: 99.0220% ( 1) 00:15:59.033 17.256 - 17.351: 99.0294% ( 1) 00:15:59.033 17.351 - 17.446: 99.0593% ( 4) 00:15:59.033 17.446 - 17.541: 99.1041% ( 6) 00:15:59.033 17.541 - 17.636: 99.1265% ( 3) 00:15:59.033 17.636 - 17.730: 99.1862% ( 8) 00:15:59.033 17.730 - 17.825: 99.2086% ( 3) 00:15:59.033 17.825 - 17.920: 99.2683% ( 8) 00:15:59.033 17.920 - 18.015: 99.3131% ( 6) 00:15:59.033 18.015 - 18.110: 99.3729% ( 8) 00:15:59.033 18.110 - 18.204: 99.4475% ( 10) 00:15:59.033 18.204 - 18.299: 99.5222% ( 10) 00:15:59.033 18.299 - 18.394: 99.5744% ( 7) 00:15:59.033 18.394 - 18.489: 99.6566% ( 11) 00:15:59.033 18.489 - 18.584: 99.6715% ( 2) 00:15:59.033 18.584 - 18.679: 99.7088% ( 5) 00:15:59.033 18.679 - 18.773: 99.7686% ( 8) 00:15:59.033 18.773 - 18.868: 99.7910% ( 3) 00:15:59.033 18.963 - 19.058: 99.8059% ( 2) 00:15:59.033 19.058 - 19.153: 99.8133% ( 1) 00:15:59.033 19.247 - 19.342: 99.8208% ( 1) 00:15:59.033 19.721 - 19.816: 99.8432% ( 3) 00:15:59.033 20.006 - 20.101: 99.8507% ( 1) 00:15:59.033 20.101 - 20.196: 99.8581% ( 1) 00:15:59.033 22.566 - 22.661: 99.8731% ( 2) 00:15:59.033 24.652 - 24.841: 99.8805% ( 1) 00:15:59.033 25.979 - 26.169: 99.8955% ( 2) 00:15:59.033 3980.705 - 4004.978: 99.9776% ( 11) 00:15:59.033 4004.978 - 4029.250: 100.0000% ( 3) 00:15:59.033 00:15:59.033 Complete histogram 00:15:59.033 ================== 00:15:59.033 Range in us Cumulative Count 00:15:59.033 2.039 - 2.050: 0.0075% ( 1) 00:15:59.033 2.050 - 2.062: 21.0243% ( 2815) 00:15:59.033 2.062 - 2.074: 42.1383% ( 2828) 00:15:59.033 2.074 - 2.086: 44.2213% ( 279) 00:15:59.033 2.086 - 2.098: 56.1296% ( 1595) 00:15:59.033 2.098 - 2.110: 61.1468% ( 672) 00:15:59.033 2.110 - 2.121: 63.2298% ( 279) 00:15:59.033 2.121 - 2.133: 73.8838% ( 1427) 00:15:59.033 2.133 - 2.145: 77.6168% ( 500) 00:15:59.033 2.145 - 2.157: 78.7517% ( 152) 00:15:59.033 2.157 - 2.169: 82.4250% ( 492) 00:15:59.033 2.169 - 2.181: 84.0526% ( 218) 00:15:59.033 2.181 - 2.193: 84.9037% ( 114) 00:15:59.033 2.193 - 2.204: 88.0096% ( 416) 00:15:59.033 2.204 - 2.216: 90.2792% ( 304) 00:15:59.033 2.216 - 2.228: 91.8919% ( 216) 00:15:59.033 2.228 - 2.240: 93.1686% ( 171) 00:15:59.033 2.240 - 2.252: 93.6613% ( 66) 00:15:59.033 2.252 - 2.264: 93.9600% ( 40) 00:15:59.033 2.264 - 2.276: 94.2362% ( 37) 00:15:59.033 2.276 - 2.287: 94.8634% ( 84) 00:15:59.033 2.287 - 2.299: 95.2217% ( 48) 00:15:59.033 2.299 - 2.311: 95.3039% ( 11) 00:15:59.033 2.311 - 2.323: 95.3860% ( 11) 00:15:59.033 2.323 - 2.335: 95.4532% ( 9) 00:15:59.033 2.335 - 2.347: 95.6324% ( 24) 00:15:59.033 2.347 - 2.359: 95.9609% ( 44) 00:15:59.033 2.359 - 2.370: 96.3640% ( 54) 00:15:59.033 2.370 - 2.382: 96.6627% ( 40) 00:15:59.033 2.382 - 2.394: 96.8792% ( 29) 00:15:59.033 2.394 - 2.406: 97.1032% ( 30) 00:15:59.033 2.406 - 2.418: 97.2376% ( 18) 00:15:59.033 2.418 - 2.430: 97.4914% ( 34) 00:15:59.033 2.430 - 2.441: 97.5959% ( 14) 00:15:59.033 2.441 - 2.453: 97.6557% ( 8) 00:15:59.033 2.453 - 2.465: 97.7079% ( 7) 00:15:59.033 2.465 - 2.477: 97.7975% ( 12) 00:15:59.033 2.477 - 2.489: 97.8423% ( 6) 00:15:59.033 2.489 - 2.501: 97.8647% ( 3) 00:15:59.033 2.501 - 2.513: 97.8796% ( 2) 00:15:59.033 2.513 - 2.524: 97.9020% ( 3) 00:15:59.033 2.524 - 2.536: 97.9244% ( 3) 00:15:59.033 2.536 - 2.548: 97.9618% ( 5) 00:15:59.033 2.548 - 2.560: 97.9692% ( 1) 00:15:59.033 2.560 - 2.572: 97.9767% ( 1) 00:15:59.033 2.572 - 2.584: 97.9842% ( 1) 00:15:59.033 2.584 - 2.596: 97.9916% ( 1) 00:15:59.033 2.596 - 2.607: 
97.9991% ( 1) 00:15:59.033 2.607 - 2.619: 98.0066% ( 1) 00:15:59.033 2.619 - 2.631: 98.0140% ( 1) 00:15:59.033 2.631 - 2.643: 98.0215% ( 1) 00:15:59.033 2.667 - 2.679: 98.0290% ( 1) 00:15:59.033 2.690 - 2.702: 98.0364% ( 1) 00:15:59.033 2.702 - 2.714: 98.0439% ( 1) 00:15:59.033 2.714 - 2.726: 98.0588% ( 2) 00:15:59.033 2.726 - 2.738: 98.0663% ( 1) 00:15:59.033 2.738 - 2.750: 98.0812% ( 2) 00:15:59.033 2.761 - 2.773: 98.0887% ( 1) 00:15:59.033 2.773 - 2.785: 98.1036% ( 2) 00:15:59.033 2.785 - 2.797: 98.1111% ( 1) 00:15:59.033 2.797 - 2.809: 98.1260% ( 2) 00:15:59.033 2.821 - 2.833: 98.1335% ( 1) 00:15:59.033 2.844 - 2.856: 98.1484% ( 2) 00:15:59.033 2.904 - 2.916: 98.1634% ( 2) 00:15:59.033 2.916 - 2.927: 98.1708% ( 1) 00:15:59.033 2.987 - 2.999: 98.1783% ( 1) 00:15:59.033 3.010 - 3.022: 98.1858% ( 1) 00:15:59.033 3.022 - 3.034: 98.1932% ( 1) 00:15:59.033 3.058 - 3.081: 98.2082% ( 2) 00:15:59.033 3.081 - 3.105: 98.2231% ( 2) 00:15:59.033 3.153 - 3.176: 98.2455% ( 3) 00:15:59.033 3.176 - 3.200: 98.2529% ( 1) 00:15:59.033 3.200 - 3.224: 98.2604% ( 1) 00:15:59.033 3.247 - 3.271: 98.2753% ( 2) 00:15:59.033 3.295 - 3.319: 98.2828% ( 1) 00:15:59.033 3.342 - 3.366: 98.3052% ( 3) 00:15:59.033 3.366 - 3.390: 98.3425% ( 5) 00:15:59.033 3.390 - 3.413: 98.3575% ( 2) 00:15:59.033 3.413 - 3.437: 98.3649% ( 1) 00:15:59.033 3.437 - 3.461: 98.3948% ( 4) 00:15:59.033 3.461 - 3.484: 98.4321% ( 5) 00:15:59.033 3.484 - 3.508: 98.4695% ( 5) 00:15:59.033 3.508 - 3.532: 98.4919% ( 3) 00:15:59.033 3.532 - 3.556: 98.5217% ( 4) 00:15:59.033 3.556 - 3.579: 98.5516% ( 4) 00:15:59.033 3.579 - 3.603: 98.5665% ( 2) 00:15:59.033 3.603 - 3.627: 98.5740% ( 1) 00:15:59.033 3.627 - 3.650: 98.5815% ( 1) 00:15:59.033 3.650 - 3.674: 98.6039% ( 3) 00:15:59.033 3.674 - 3.698: 98.6263% ( 3) 00:15:59.033 3.698 - 3.721: 98.6337% ( 1) 00:15:59.033 3.721 - 3.745: 98.6412% ( 1) 00:15:59.033 3.745 - 3.769: 98.6486% ( 1) 00:15:59.033 3.769 - 3.793: 98.6710% ( 3) 00:15:59.033 3.793 - 3.816: 98.6860% ( 2) 00:15:59.033 3.816 - 3.840: 98.6934% ( 1) 00:15:59.033 3.911 - 3.935: 98.7009% ( 1) 00:15:59.033 3.935 - 3.959: 98.7084% ( 1) 00:15:59.033 3.982 - 4.006: 98.7233% ( 2) 00:15:59.033 4.077 - 4.101: 98.7308% ( 1) 00:15:59.033 4.290 - 4.314: 98.7382% ( 1) 00:15:59.033 5.381 - 5.404: 98.7457% ( 1) 00:15:59.033 5.476 - 5.499: 98.7532% ( 1) 00:15:59.033 5.713 - 5.736: 98.7606% ( 1) 00:15:59.033 5.855 - 5.879: 98.7681% ( 1) 00:15:59.033 5.879 - 5.902: 98.7756% ( 1) 00:15:59.033 6.116 - 6.163: 98.7830% ( 1) 00:15:59.033 6.210 - 6.258: 98.7905% ( 1) 00:15:59.033 6.258 - 6.305: 98.7980% ( 1) 00:15:59.034 6.400 - 6.447: 98.8054% ( 1) 00:15:59.034 6.542 - 6.590: 98.8129% ( 1) 00:15:59.034 7.206 - 7.253: 98.8204% ( 1) 00:15:59.034 7.490 - 7.538: 98.8278% ( 1) 00:15:59.034 7.585 - 7.633: 98.8353% ( 1) 00:15:59.034 7.964 - 8.012: 98.8428% ( 1) 00:15:59.034 8.012 - 8.059: 98.8502% ( 1) 00:15:59.034 8.296 - 8.344: 98.8577% ( 1) 00:15:59.034 9.102 - 9.150: 98.8652% ( 1) 00:15:59.034 9.861 - 9.908: 98.8726% ( 1) 00:15:59.034 10.809 - 10.856: 98.8801% ( 1) 00:15:59.034 15.550 - 15.644: 98.9100% ( 4) 00:15:59.034 15.644 - 15.739: 98.9249% ( 2) 00:15:59.034 15.739 - 15.834: 98.9398% ( 2) 00:15:59.034 15.834 - 15.929: 98.9772% ( 5) 00:15:59.034 15.929 - 16.024: 99.0220% ( 6) 00:15:59.034 16.024 - 16.119: 99.0443% ( 3) 00:15:59.034 16.119 - 16.213: 99.0518% ( 1) 00:15:59.034 16.213 - 16.308: 99.0966% ( 6) 00:15:59.034 16.308 - 16.403: 99.1190% ( 3) 00:15:59.034 16.403 - 16.498: 99.1713% ( 7) 00:15:59.034 16.498 - 16.593: 99.2086% ( 5) 00:15:59.034 16.593 - 16.687: 
99.2235% ( 2) 00:15:59.034 16.687 - 16.782: 99.2833% ( 8) 00:15:59.034 16.782 - 16.877: 99.3505% ( 9) 00:15:59.034 16.972 - 17.067: 99.3729% ( 3) 00:15:59.034 17.067 - 17.161: 99.3878% ( 2) 00:15:59.034 17.161 - 17.256: 99.4102% ( 3) 00:15:59.034 17.541 - 17.636: 99.4176% ( 1) 00:15:59.034 17.730 - 17.825: 99.4251% ( 1) 00:15:59.034 18.015 - 18.110: 99.4400% ( 2) 00:15:59.034 18.110 - 18.204: 99.4475% ( 1) 00:15:59.034 18.204 - 18.299: 99.4550% ( 1) 00:15:59.034 18.394 - 18.489: 99.4699% ( 2) 00:15:59.034 18.773 - 18.868: 99.4774% ( 1) 00:15:59.034 20.006 - 20.101: 99.4848% ( 1) 00:15:59.034 1201.493 - 1207.561: 99.4923% ( 1) 00:15:59.034 3980.705 - 4004.978: 99.8432% ( 47) 00:15:59.034 4004.978 - 4029.250: 99.9851%[2024-07-14 02:03:04.353793] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.034 ( 19) 00:15:59.034 4975.881 - 5000.154: 99.9925% ( 1) 00:15:59.034 5995.330 - 6019.603: 100.0000% ( 1) 00:15:59.034 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:59.034 [ 00:15:59.034 { 00:15:59.034 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:59.034 "subtype": "Discovery", 00:15:59.034 "listen_addresses": [], 00:15:59.034 "allow_any_host": true, 00:15:59.034 "hosts": [] 00:15:59.034 }, 00:15:59.034 { 00:15:59.034 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:59.034 "subtype": "NVMe", 00:15:59.034 "listen_addresses": [ 00:15:59.034 { 00:15:59.034 "trtype": "VFIOUSER", 00:15:59.034 "adrfam": "IPv4", 00:15:59.034 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:59.034 "trsvcid": "0" 00:15:59.034 } 00:15:59.034 ], 00:15:59.034 "allow_any_host": true, 00:15:59.034 "hosts": [], 00:15:59.034 "serial_number": "SPDK1", 00:15:59.034 "model_number": "SPDK bdev Controller", 00:15:59.034 "max_namespaces": 32, 00:15:59.034 "min_cntlid": 1, 00:15:59.034 "max_cntlid": 65519, 00:15:59.034 "namespaces": [ 00:15:59.034 { 00:15:59.034 "nsid": 1, 00:15:59.034 "bdev_name": "Malloc1", 00:15:59.034 "name": "Malloc1", 00:15:59.034 "nguid": "8FCB4F7806D24299BB0AC7F2B289A8A0", 00:15:59.034 "uuid": "8fcb4f78-06d2-4299-bb0a-c7f2b289a8a0" 00:15:59.034 }, 00:15:59.034 { 00:15:59.034 "nsid": 2, 00:15:59.034 "bdev_name": "Malloc3", 00:15:59.034 "name": "Malloc3", 00:15:59.034 "nguid": "902842F9C52745CEA5E83BAAE3ADAA6F", 00:15:59.034 "uuid": "902842f9-c527-45ce-a5e8-3baae3adaa6f" 00:15:59.034 } 00:15:59.034 ] 00:15:59.034 }, 00:15:59.034 { 00:15:59.034 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:59.034 "subtype": "NVMe", 00:15:59.034 "listen_addresses": [ 00:15:59.034 { 00:15:59.034 "trtype": "VFIOUSER", 00:15:59.034 "adrfam": "IPv4", 00:15:59.034 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:59.034 "trsvcid": "0" 00:15:59.034 } 00:15:59.034 ], 00:15:59.034 "allow_any_host": true, 00:15:59.034 "hosts": [], 00:15:59.034 "serial_number": "SPDK2", 00:15:59.034 "model_number": "SPDK bdev Controller", 
00:15:59.034 "max_namespaces": 32, 00:15:59.034 "min_cntlid": 1, 00:15:59.034 "max_cntlid": 65519, 00:15:59.034 "namespaces": [ 00:15:59.034 { 00:15:59.034 "nsid": 1, 00:15:59.034 "bdev_name": "Malloc2", 00:15:59.034 "name": "Malloc2", 00:15:59.034 "nguid": "E18D3847876144848DD01469DC3A808B", 00:15:59.034 "uuid": "e18d3847-8761-4484-8dd0-1469dc3a808b" 00:15:59.034 } 00:15:59.034 ] 00:15:59.034 } 00:15:59.034 ] 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1554350 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:59.034 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:59.292 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.292 [2024-07-14 02:03:04.837359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:59.292 Malloc4 00:15:59.292 02:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:59.549 [2024-07-14 02:03:05.199113] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.549 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:59.806 Asynchronous Event Request test 00:15:59.806 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.806 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.806 Registering asynchronous event callbacks... 00:15:59.806 Starting namespace attribute notice tests for all controllers... 00:15:59.806 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:59.806 aer_cb - Changed Namespace 00:15:59.806 Cleaning up... 
00:15:59.806 [ 00:15:59.806 { 00:15:59.806 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:59.806 "subtype": "Discovery", 00:15:59.806 "listen_addresses": [], 00:15:59.806 "allow_any_host": true, 00:15:59.806 "hosts": [] 00:15:59.806 }, 00:15:59.806 { 00:15:59.806 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:59.806 "subtype": "NVMe", 00:15:59.806 "listen_addresses": [ 00:15:59.806 { 00:15:59.806 "trtype": "VFIOUSER", 00:15:59.806 "adrfam": "IPv4", 00:15:59.806 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:59.806 "trsvcid": "0" 00:15:59.806 } 00:15:59.806 ], 00:15:59.806 "allow_any_host": true, 00:15:59.806 "hosts": [], 00:15:59.806 "serial_number": "SPDK1", 00:15:59.806 "model_number": "SPDK bdev Controller", 00:15:59.806 "max_namespaces": 32, 00:15:59.806 "min_cntlid": 1, 00:15:59.806 "max_cntlid": 65519, 00:15:59.806 "namespaces": [ 00:15:59.806 { 00:15:59.806 "nsid": 1, 00:15:59.807 "bdev_name": "Malloc1", 00:15:59.807 "name": "Malloc1", 00:15:59.807 "nguid": "8FCB4F7806D24299BB0AC7F2B289A8A0", 00:15:59.807 "uuid": "8fcb4f78-06d2-4299-bb0a-c7f2b289a8a0" 00:15:59.807 }, 00:15:59.807 { 00:15:59.807 "nsid": 2, 00:15:59.807 "bdev_name": "Malloc3", 00:15:59.807 "name": "Malloc3", 00:15:59.807 "nguid": "902842F9C52745CEA5E83BAAE3ADAA6F", 00:15:59.807 "uuid": "902842f9-c527-45ce-a5e8-3baae3adaa6f" 00:15:59.807 } 00:15:59.807 ] 00:15:59.807 }, 00:15:59.807 { 00:15:59.807 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:59.807 "subtype": "NVMe", 00:15:59.807 "listen_addresses": [ 00:15:59.807 { 00:15:59.807 "trtype": "VFIOUSER", 00:15:59.807 "adrfam": "IPv4", 00:15:59.807 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:59.807 "trsvcid": "0" 00:15:59.807 } 00:15:59.807 ], 00:15:59.807 "allow_any_host": true, 00:15:59.807 "hosts": [], 00:15:59.807 "serial_number": "SPDK2", 00:15:59.807 "model_number": "SPDK bdev Controller", 00:15:59.807 "max_namespaces": 32, 00:15:59.807 "min_cntlid": 1, 00:15:59.807 "max_cntlid": 65519, 00:15:59.807 "namespaces": [ 00:15:59.807 { 00:15:59.807 "nsid": 1, 00:15:59.807 "bdev_name": "Malloc2", 00:15:59.807 "name": "Malloc2", 00:15:59.807 "nguid": "E18D3847876144848DD01469DC3A808B", 00:15:59.807 "uuid": "e18d3847-8761-4484-8dd0-1469dc3a808b" 00:15:59.807 }, 00:15:59.807 { 00:15:59.807 "nsid": 2, 00:15:59.807 "bdev_name": "Malloc4", 00:15:59.807 "name": "Malloc4", 00:15:59.807 "nguid": "CD79FB585E2944C19A822160FE5F47CE", 00:15:59.807 "uuid": "cd79fb58-5e29-44c1-9a82-2160fe5f47ce" 00:15:59.807 } 00:15:59.807 ] 00:15:59.807 } 00:15:59.807 ] 00:15:59.807 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1554350 00:15:59.807 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:59.807 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1548646 00:15:59.807 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1548646 ']' 00:15:59.807 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1548646 00:15:59.807 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:59.807 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.807 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1548646 00:16:00.065 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:00.065 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:16:00.065 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1548646' 00:16:00.065 killing process with pid 1548646 00:16:00.065 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1548646 00:16:00.065 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1548646 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1554490 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1554490' 00:16:00.325 Process pid: 1554490 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1554490 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1554490 ']' 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.325 02:03:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:00.325 [2024-07-14 02:03:05.866729] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:00.325 [2024-07-14 02:03:05.867769] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:00.325 [2024-07-14 02:03:05.867844] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.325 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.325 [2024-07-14 02:03:05.934709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.585 [2024-07-14 02:03:06.032088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.585 [2024-07-14 02:03:06.032166] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:00.585 [2024-07-14 02:03:06.032182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.585 [2024-07-14 02:03:06.032196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.585 [2024-07-14 02:03:06.032208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.585 [2024-07-14 02:03:06.032271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.585 [2024-07-14 02:03:06.032323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.585 [2024-07-14 02:03:06.032443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.585 [2024-07-14 02:03:06.032445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.585 [2024-07-14 02:03:06.138649] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:00.585 [2024-07-14 02:03:06.138828] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:00.585 [2024-07-14 02:03:06.139185] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:00.585 [2024-07-14 02:03:06.139765] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:00.585 [2024-07-14 02:03:06.140014] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:00.585 02:03:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.585 02:03:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:16:00.585 02:03:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:01.519 02:03:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:01.777 02:03:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:01.777 02:03:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:01.777 02:03:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:01.777 02:03:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:01.777 02:03:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:02.049 Malloc1 00:16:02.049 02:03:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:02.309 02:03:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:02.567 02:03:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:02.825 02:03:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:16:02.825 02:03:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:02.825 02:03:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:03.083 Malloc2 00:16:03.083 02:03:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:03.340 02:03:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:03.598 02:03:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:03.855 02:03:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:03.855 02:03:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1554490 00:16:03.855 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1554490 ']' 00:16:03.855 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1554490 00:16:03.855 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:03.855 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:03.855 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1554490 00:16:04.149 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:04.149 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:04.149 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1554490' 00:16:04.149 killing process with pid 1554490 00:16:04.149 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1554490 00:16:04.149 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1554490 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:04.408 00:16:04.408 real 0m52.532s 00:16:04.408 user 3m27.539s 00:16:04.408 sys 0m4.372s 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:04.408 ************************************ 00:16:04.408 END TEST nvmf_vfio_user 00:16:04.408 ************************************ 00:16:04.408 02:03:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.408 02:03:09 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:04.408 02:03:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.408 02:03:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.408 02:03:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.408 ************************************ 00:16:04.408 START 
TEST nvmf_vfio_user_nvme_compliance 00:16:04.408 ************************************ 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:04.408 * Looking for test storage... 00:16:04.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.408 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1555478 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1555478' 00:16:04.409 Process pid: 1555478 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1555478 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1555478 ']' 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.409 02:03:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.409 [2024-07-14 02:03:10.017714] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:04.409 [2024-07-14 02:03:10.017808] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.409 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.409 [2024-07-14 02:03:10.078658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.668 [2024-07-14 02:03:10.175184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.668 [2024-07-14 02:03:10.175243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.668 [2024-07-14 02:03:10.175262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.668 [2024-07-14 02:03:10.175273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.668 [2024-07-14 02:03:10.175282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:04.668 [2024-07-14 02:03:10.175374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.668 [2024-07-14 02:03:10.175450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.668 [2024-07-14 02:03:10.175467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.668 02:03:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.668 02:03:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:04.668 02:03:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.605 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.865 malloc0 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.865 02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.865 
02:03:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:05.865 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.865 00:16:05.865 00:16:05.865 CUnit - A unit testing framework for C - Version 2.1-3 00:16:05.865 http://cunit.sourceforge.net/ 00:16:05.865 00:16:05.865 00:16:05.865 Suite: nvme_compliance 00:16:05.865 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 02:03:11.512413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.865 [2024-07-14 02:03:11.513907] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:05.865 [2024-07-14 02:03:11.513934] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:05.865 [2024-07-14 02:03:11.513948] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:05.865 [2024-07-14 02:03:11.515430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.865 passed 00:16:06.125 Test: admin_identify_ctrlr_verify_fused ...[2024-07-14 02:03:11.602056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.125 [2024-07-14 02:03:11.605081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.125 passed 00:16:06.125 Test: admin_identify_ns ...[2024-07-14 02:03:11.692640] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.125 [2024-07-14 02:03:11.751888] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:06.125 [2024-07-14 02:03:11.759883] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:06.125 [2024-07-14 02:03:11.781010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.385 passed 00:16:06.385 Test: admin_get_features_mandatory_features ...[2024-07-14 02:03:11.868725] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.385 [2024-07-14 02:03:11.871748] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.385 passed 00:16:06.385 Test: admin_get_features_optional_features ...[2024-07-14 02:03:11.957347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.385 [2024-07-14 02:03:11.960367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.385 passed 00:16:06.385 Test: admin_set_features_number_of_queues ...[2024-07-14 02:03:12.043382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.644 [2024-07-14 02:03:12.148110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.644 passed 00:16:06.644 Test: admin_get_log_page_mandatory_logs ...[2024-07-14 02:03:12.231756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.644 [2024-07-14 02:03:12.234791] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.644 passed 00:16:06.644 Test: admin_get_log_page_with_lpo ...[2024-07-14 02:03:12.315986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.902 [2024-07-14 02:03:12.383896] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:06.902 [2024-07-14 02:03:12.399992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.902 passed 00:16:06.902 Test: fabric_property_get ...[2024-07-14 02:03:12.479559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.902 [2024-07-14 02:03:12.480825] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:06.902 [2024-07-14 02:03:12.482583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.902 passed 00:16:06.902 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 02:03:12.568110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.902 [2024-07-14 02:03:12.569414] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:06.902 [2024-07-14 02:03:12.571135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.161 passed 00:16:07.161 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 02:03:12.656414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.161 [2024-07-14 02:03:12.736881] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:07.161 [2024-07-14 02:03:12.753890] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:07.161 [2024-07-14 02:03:12.758982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.161 passed 00:16:07.161 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 02:03:12.838624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.161 [2024-07-14 02:03:12.839926] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:07.161 [2024-07-14 02:03:12.841645] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.419 passed 00:16:07.419 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 02:03:12.927522] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.419 [2024-07-14 02:03:13.002889] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:07.419 [2024-07-14 02:03:13.026889] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:07.419 [2024-07-14 02:03:13.031985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.419 passed 00:16:07.679 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 02:03:13.115578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.679 [2024-07-14 02:03:13.116860] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:07.679 [2024-07-14 02:03:13.116909] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:07.679 [2024-07-14 02:03:13.118600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.679 passed 00:16:07.679 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 02:03:13.199828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.679 [2024-07-14 02:03:13.292879] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:07.679 [2024-07-14 02:03:13.300893] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:07.679 [2024-07-14 02:03:13.308892] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:07.679 [2024-07-14 02:03:13.316888] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:07.679 [2024-07-14 02:03:13.346005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.938 passed 00:16:07.938 Test: admin_create_io_sq_verify_pc ...[2024-07-14 02:03:13.429797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.938 [2024-07-14 02:03:13.442888] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:07.938 [2024-07-14 02:03:13.460178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.938 passed 00:16:07.938 Test: admin_create_io_qp_max_qps ...[2024-07-14 02:03:13.547758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:09.315 [2024-07-14 02:03:14.651988] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:09.574 [2024-07-14 02:03:15.041115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:09.574 passed 00:16:09.574 Test: admin_create_io_sq_shared_cq ...[2024-07-14 02:03:15.125393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:09.574 [2024-07-14 02:03:15.256873] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:09.835 [2024-07-14 02:03:15.293950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:09.835 passed 00:16:09.835 00:16:09.835 Run Summary: Type Total Ran Passed Failed Inactive 00:16:09.835 suites 1 1 n/a 0 0 00:16:09.835 tests 18 18 18 0 0 00:16:09.835 asserts 360 360 360 0 n/a 00:16:09.835 00:16:09.835 Elapsed time = 1.569 seconds 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1555478 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1555478 ']' 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1555478 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1555478 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1555478' 00:16:09.835 killing process with pid 1555478 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1555478 00:16:09.835 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1555478 00:16:10.094 02:03:15 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:10.094 00:16:10.094 real 0m5.734s 00:16:10.094 user 0m16.155s 00:16:10.094 sys 0m0.539s 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:10.094 ************************************ 00:16:10.094 END TEST nvmf_vfio_user_nvme_compliance 00:16:10.094 ************************************ 00:16:10.094 02:03:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:10.094 02:03:15 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:10.094 02:03:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:10.094 02:03:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.094 02:03:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:10.094 ************************************ 00:16:10.094 START TEST nvmf_vfio_user_fuzz 00:16:10.094 ************************************ 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:10.094 * Looking for test storage... 00:16:10.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.094 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.095 02:03:15 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1556203 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1556203' 00:16:10.095 Process pid: 1556203 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1556203 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1556203 ']' 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.095 02:03:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.689 02:03:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.689 02:03:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:10.689 02:03:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.623 malloc0 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:11.623 02:03:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:43.694 Fuzzing completed. 
Shutting down the fuzz application 00:16:43.694 00:16:43.694 Dumping successful admin opcodes: 00:16:43.694 8, 9, 10, 24, 00:16:43.694 Dumping successful io opcodes: 00:16:43.694 0, 00:16:43.694 NS: 0x200003a1ef00 I/O qp, Total commands completed: 601656, total successful commands: 2325, random_seed: 355844160 00:16:43.694 NS: 0x200003a1ef00 admin qp, Total commands completed: 132200, total successful commands: 1075, random_seed: 2553920896 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1556203 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1556203 ']' 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1556203 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1556203 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1556203' 00:16:43.694 killing process with pid 1556203 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1556203 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1556203 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:43.694 00:16:43.694 real 0m32.260s 00:16:43.694 user 0m31.194s 00:16:43.694 sys 0m30.523s 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.694 02:03:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:43.694 ************************************ 00:16:43.694 END TEST nvmf_vfio_user_fuzz 00:16:43.694 ************************************ 00:16:43.694 02:03:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:43.694 02:03:47 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:43.694 02:03:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:43.694 02:03:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.694 02:03:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:43.694 ************************************ 
00:16:43.694 START TEST nvmf_host_management 00:16:43.694 ************************************ 00:16:43.694 02:03:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:43.694 * Looking for test storage... 00:16:43.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.694 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.695 
02:03:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:43.695 02:03:48 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:43.695 02:03:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:44.632 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:44.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:44.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:44.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.632 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:44.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:16:44.632 00:16:44.633 --- 10.0.0.2 ping statistics --- 00:16:44.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.633 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:16:44.633 00:16:44.633 --- 10.0.0.1 ping statistics --- 00:16:44.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.633 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1561645 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1561645 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1561645 ']' 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:44.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.633 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.633 [2024-07-14 02:03:50.248694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:44.633 [2024-07-14 02:03:50.248794] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.633 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.633 [2024-07-14 02:03:50.321482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.892 [2024-07-14 02:03:50.417773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.892 [2024-07-14 02:03:50.417831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.892 [2024-07-14 02:03:50.417857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.892 [2024-07-14 02:03:50.417881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.892 [2024-07-14 02:03:50.417894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.892 [2024-07-14 02:03:50.417994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.892 [2024-07-14 02:03:50.418048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.892 [2024-07-14 02:03:50.418103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:44.892 [2024-07-14 02:03:50.418106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.892 [2024-07-14 02:03:50.572681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.892 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:45.151 02:03:50 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:45.151 Malloc0 00:16:45.151 [2024-07-14 02:03:50.633728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1561810 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1561810 /var/tmp/bdevperf.sock 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1561810 ']' 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:45.151 { 00:16:45.151 "params": { 00:16:45.151 "name": "Nvme$subsystem", 00:16:45.151 "trtype": "$TEST_TRANSPORT", 00:16:45.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:45.151 "adrfam": "ipv4", 00:16:45.151 "trsvcid": "$NVMF_PORT", 00:16:45.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:45.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:45.151 "hdgst": ${hdgst:-false}, 00:16:45.151 "ddgst": ${ddgst:-false} 00:16:45.151 }, 00:16:45.151 "method": "bdev_nvme_attach_controller" 00:16:45.151 } 00:16:45.151 EOF 00:16:45.151 )") 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:45.151 02:03:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:45.151 "params": { 00:16:45.151 "name": "Nvme0", 00:16:45.151 "trtype": "tcp", 00:16:45.151 "traddr": "10.0.0.2", 00:16:45.151 "adrfam": "ipv4", 00:16:45.151 "trsvcid": "4420", 00:16:45.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:45.151 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:45.151 "hdgst": false, 00:16:45.151 "ddgst": false 00:16:45.151 }, 00:16:45.151 "method": "bdev_nvme_attach_controller" 00:16:45.151 }' 00:16:45.151 [2024-07-14 02:03:50.706420] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:45.151 [2024-07-14 02:03:50.706512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1561810 ] 00:16:45.151 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.151 [2024-07-14 02:03:50.769284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.409 [2024-07-14 02:03:50.856591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.409 Running I/O for 10 seconds... 
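For reference, the bdevperf run above can be reproduced standalone. The trace only echoes the bdev_nvme_attach_controller fragment produced by gen_nvmf_target_json, so the outer "subsystems"/"bdev" wrapper in the sketch below is an assumption about the generated file rather than something shown in the output:

# Sketch only; assumes the SPDK repo root as working directory and a target
# already listening on 10.0.0.2:4420, as set up earlier in this trace.
build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)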
00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:45.669 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.931 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:45.931 [2024-07-14 02:03:51.448502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408dd0 is same with the state(5) to be set 00:16:45.931 [2024-07-14 02:03:51.449271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.931 [2024-07-14 02:03:51.449691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.931 [2024-07-14 02:03:51.449706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.449720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.449735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.449748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.449763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.449777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.449792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.449806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.449822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.449836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.449851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.449872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.449890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.449905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.449926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.449939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.449954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.449968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.449987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:45.932 [2024-07-14 02:03:51.450497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 
[2024-07-14 02:03:51.450802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.932 [2024-07-14 02:03:51.450863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.932 [2024-07-14 02:03:51.450887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.450902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.450930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.450944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.450959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.450973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.450988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 
02:03:51.451134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.933 [2024-07-14 02:03:51.451308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.451400] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2395100 was disconnected and freed. reset controller. 
00:16:45.933 [2024-07-14 02:03:51.452585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:45.933 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.933 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:45.933 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.933 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:45.933 task offset: 53632 on job bdev=Nvme0n1 fails 00:16:45.933 00:16:45.933 Latency(us) 00:16:45.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.933 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.933 Job: Nvme0n1 ended in about 0.38 seconds with error 00:16:45.933 Verification LBA range: start 0x0 length 0x400 00:16:45.933 Nvme0n1 : 0.38 1004.49 62.78 167.41 0.00 53112.06 2742.80 47962.64 00:16:45.933 =================================================================================================================== 00:16:45.933 Total : 1004.49 62.78 167.41 0.00 53112.06 2742.80 47962.64 00:16:45.933 [2024-07-14 02:03:51.454499] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:45.933 [2024-07-14 02:03:51.454529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f83ed0 (9): Bad file descriptor 00:16:45.933 [2024-07-14 02:03:51.456192] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:16:45.933 [2024-07-14 02:03:51.456442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:45.933 [2024-07-14 02:03:51.456473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.933 [2024-07-14 02:03:51.456503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:45.933 [2024-07-14 02:03:51.456520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:45.933 [2024-07-14 02:03:51.456535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:16:45.933 [2024-07-14 02:03:51.456553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f83ed0 00:16:45.933 [2024-07-14 02:03:51.456588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f83ed0 (9): Bad file descriptor 00:16:45.933 [2024-07-14 02:03:51.456614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:45.933 [2024-07-14 02:03:51.456629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:45.933 [2024-07-14 02:03:51.456647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:45.933 [2024-07-14 02:03:51.456680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
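
The burst of ABORTED - SQ DELETION completions and the "does not allow host" connect failure above are the fallout of the host-management test exercising host-access control on the subsystem while bdevperf still has I/O in flight: the target tears down the submission queue, every outstanding READ completes with status 00/08, and the reconnect is refused until the host NQN is allowed again (the rpc_cmd nvmf_subsystem_add_host at host_management.sh@85 is that re-grant). A minimal sketch of the re-grant step, assuming a running nvmf_tgt and the same NQNs as in this log, using the rpc.py path seen elsewhere in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# re-allow the host on the subsystem, then dump the subsystem list to confirm it is listed
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_get_subsystems
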
00:16:45.933 02:03:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.933 02:03:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1561810 00:16:46.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1561810) - No such process 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:46.913 { 00:16:46.913 "params": { 00:16:46.913 "name": "Nvme$subsystem", 00:16:46.913 "trtype": "$TEST_TRANSPORT", 00:16:46.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:46.913 "adrfam": "ipv4", 00:16:46.913 "trsvcid": "$NVMF_PORT", 00:16:46.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:46.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:46.913 "hdgst": ${hdgst:-false}, 00:16:46.913 "ddgst": ${ddgst:-false} 00:16:46.913 }, 00:16:46.913 "method": "bdev_nvme_attach_controller" 00:16:46.913 } 00:16:46.913 EOF 00:16:46.913 )") 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:46.913 02:03:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:46.913 "params": { 00:16:46.913 "name": "Nvme0", 00:16:46.913 "trtype": "tcp", 00:16:46.913 "traddr": "10.0.0.2", 00:16:46.913 "adrfam": "ipv4", 00:16:46.913 "trsvcid": "4420", 00:16:46.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:46.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:46.913 "hdgst": false, 00:16:46.913 "ddgst": false 00:16:46.913 }, 00:16:46.913 "method": "bdev_nvme_attach_controller" 00:16:46.913 }' 00:16:46.913 [2024-07-14 02:03:52.505076] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:46.913 [2024-07-14 02:03:52.505172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1561969 ] 00:16:46.913 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.913 [2024-07-14 02:03:52.566595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.173 [2024-07-14 02:03:52.653391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.433 Running I/O for 1 seconds... 
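
The heredoc that gen_nvmf_target_json expands above is what bdevperf receives on /dev/fd/62. A rough file-based equivalent is sketched below; only the bdev_nvme_attach_controller entry and the bdevperf flags are taken verbatim from the trace, while the surrounding "subsystems"/"bdev" wrapper is an assumption about the standard SPDK JSON-config layout:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same flags as the traced run: queue depth 64, 64 KiB I/O, verify workload, 1 second
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1
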
00:16:48.366 00:16:48.366 Latency(us) 00:16:48.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.366 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:48.366 Verification LBA range: start 0x0 length 0x400 00:16:48.366 Nvme0n1 : 1.03 1123.23 70.20 0.00 0.00 56193.16 13301.38 46215.02 00:16:48.366 =================================================================================================================== 00:16:48.366 Total : 1123.23 70.20 0.00 0.00 56193.16 13301.38 46215.02 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.624 rmmod nvme_tcp 00:16:48.624 rmmod nvme_fabrics 00:16:48.624 rmmod nvme_keyring 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1561645 ']' 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1561645 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1561645 ']' 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1561645 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.624 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1561645 00:16:48.882 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:48.882 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:48.882 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1561645' 00:16:48.882 killing process with pid 1561645 00:16:48.882 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1561645 00:16:48.882 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1561645 00:16:48.882 [2024-07-14 02:03:54.551441] 
app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:49.142 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.142 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.142 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.143 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.143 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.143 02:03:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.143 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.143 02:03:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.053 02:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:51.053 02:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:51.053 00:16:51.053 real 0m8.628s 00:16:51.053 user 0m19.605s 00:16:51.053 sys 0m2.589s 00:16:51.053 02:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.053 02:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.053 ************************************ 00:16:51.053 END TEST nvmf_host_management 00:16:51.053 ************************************ 00:16:51.053 02:03:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:51.053 02:03:56 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:51.053 02:03:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:51.053 02:03:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.053 02:03:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.053 ************************************ 00:16:51.053 START TEST nvmf_lvol 00:16:51.053 ************************************ 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:51.053 * Looking for test storage... 
00:16:51.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.053 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.338 02:03:56 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.338 02:03:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:53.244 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:53.245 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:53.245 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:53.245 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:53.245 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.245 
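
The device discovery traced above boils down to reading the net/ directory under each detected E810 PCI function in sysfs; on this host the two ports resolve to the renamed interfaces cvl_0_0 and cvl_0_1. A one-line sketch per port, with the PCI addresses taken from the "Found 0000:0a:00.x" messages above:

ls /sys/bus/pci/devices/0000:0a:00.0/net/    # -> cvl_0_0 on this machine
ls /sys/bus/pci/devices/0000:0a:00.1/net/    # -> cvl_0_1
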
02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:16:53.245 00:16:53.245 --- 10.0.0.2 ping statistics --- 00:16:53.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.245 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:53.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:16:53.245 00:16:53.245 --- 10.0.0.1 ping statistics --- 00:16:53.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.245 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1564161 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1564161 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1564161 ']' 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.245 02:03:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:53.245 [2024-07-14 02:03:58.887667] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:53.245 [2024-07-14 02:03:58.887749] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.245 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.504 [2024-07-14 02:03:58.953637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:53.504 [2024-07-14 02:03:59.042108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.504 [2024-07-14 02:03:59.042175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:53.504 [2024-07-14 02:03:59.042196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.504 [2024-07-14 02:03:59.042207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.504 [2024-07-14 02:03:59.042217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.504 [2024-07-14 02:03:59.042301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.505 [2024-07-14 02:03:59.042368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.505 [2024-07-14 02:03:59.042370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.505 02:03:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.505 02:03:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:53.505 02:03:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.505 02:03:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:53.505 02:03:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:53.505 02:03:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.505 02:03:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:53.762 [2024-07-14 02:03:59.403501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.763 02:03:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:54.022 02:03:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:54.282 02:03:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:54.542 02:03:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:54.543 02:03:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:54.803 02:04:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:55.063 02:04:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bb8b07b0-2076-4282-81c4-f6f887f2b734 00:16:55.063 02:04:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb8b07b0-2076-4282-81c4-f6f887f2b734 lvol 20 00:16:55.321 02:04:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d6b771b6-231e-49b6-955a-71bfd8781e76 00:16:55.321 02:04:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:55.580 02:04:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d6b771b6-231e-49b6-955a-71bfd8781e76 00:16:55.580 02:04:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
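
Stripped of the xtrace noise, the target bring-up and provisioning the nvmf_lvol test just performed is the RPC sequence below. Every command and argument is copied from the trace above; the shell variables simply capture what each RPC prints (Malloc0/Malloc1, the lvstore UUID, the lvol bdev name):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# start the target inside the netns created earlier; the harness then waits for the
# RPC socket (waitforlisten) before issuing any of the commands that follow
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
$rpc nvmf_create_transport -t tcp -o -u 8192
base0=$($rpc bdev_malloc_create 64 512)          # -> Malloc0
base1=$($rpc bdev_malloc_create 64 512)          # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)   # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)  # lvol of size 20 (the test's LVOL_BDEV_INIT_SIZE)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
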
00:16:55.838 [2024-07-14 02:04:01.490370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.838 02:04:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:56.096 02:04:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1564563 00:16:56.096 02:04:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:56.096 02:04:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:56.096 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.474 02:04:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d6b771b6-231e-49b6-955a-71bfd8781e76 MY_SNAPSHOT 00:16:57.474 02:04:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8b733707-fcab-40e5-94fc-6a4642fd26b8 00:16:57.474 02:04:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d6b771b6-231e-49b6-955a-71bfd8781e76 30 00:16:57.732 02:04:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8b733707-fcab-40e5-94fc-6a4642fd26b8 MY_CLONE 00:16:57.990 02:04:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=56c986bf-f0d6-49d7-a1fd-6c7683a2437a 00:16:57.990 02:04:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 56c986bf-f0d6-49d7-a1fd-6c7683a2437a 00:16:58.556 02:04:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1564563 00:17:06.669 Initializing NVMe Controllers 00:17:06.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:06.669 Controller IO queue size 128, less than required. 00:17:06.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:06.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:06.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:06.669 Initialization complete. Launching workers. 
00:17:06.669 ======================================================== 00:17:06.669 Latency(us) 00:17:06.669 Device Information : IOPS MiB/s Average min max 00:17:06.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10725.33 41.90 11942.07 1003.15 60490.13 00:17:06.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10638.53 41.56 12040.51 1625.64 59896.79 00:17:06.670 ======================================================== 00:17:06.670 Total : 21363.86 83.45 11991.09 1003.15 60490.13 00:17:06.670 00:17:06.670 02:04:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:06.928 02:04:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d6b771b6-231e-49b6-955a-71bfd8781e76 00:17:07.211 02:04:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb8b07b0-2076-4282-81c4-f6f887f2b734 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.479 rmmod nvme_tcp 00:17:07.479 rmmod nvme_fabrics 00:17:07.479 rmmod nvme_keyring 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1564161 ']' 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1564161 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1564161 ']' 00:17:07.479 02:04:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1564161 00:17:07.479 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:07.479 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.479 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1564161 00:17:07.479 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:07.479 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:07.479 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1564161' 00:17:07.479 killing process with pid 1564161 00:17:07.479 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1564161 00:17:07.479 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1564161 00:17:07.742 02:04:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:07.742 
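
After provisioning, the test snapshots and clones the lvol, inflates the clone, drives the exported namespace with spdk_nvme_perf over the TCP listener, and tears everything down. Condensed from the trace above, with $rpc, $lvol and $lvs as in the provisioning sketch earlier; the names MY_SNAPSHOT/MY_CLONE and all flags are verbatim from the log:

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                 # grow to the test's LVOL_BDEV_FINAL_SIZE of 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                  # decouple the clone from its snapshot
# 10-second random-write run, 4 KiB I/O, queue depth 128, core mask 0x18 (lcores 3 and 4)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
# teardown
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
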
02:04:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:07.742 02:04:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:07.742 02:04:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.742 02:04:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.742 02:04:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.742 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.742 02:04:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.279 00:17:10.279 real 0m18.674s 00:17:10.279 user 1m1.121s 00:17:10.279 sys 0m6.669s 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:10.279 ************************************ 00:17:10.279 END TEST nvmf_lvol 00:17:10.279 ************************************ 00:17:10.279 02:04:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:10.279 02:04:15 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:10.279 02:04:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:10.279 02:04:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.279 02:04:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.279 ************************************ 00:17:10.279 START TEST nvmf_lvs_grow 00:17:10.279 ************************************ 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:10.279 * Looking for test storage... 
00:17:10.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.279 02:04:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.280 02:04:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:12.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:12.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:12.188 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:12.188 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:12.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:17:12.188 00:17:12.188 --- 10.0.0.2 ping statistics --- 00:17:12.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.188 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:12.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:17:12.188 00:17:12.188 --- 10.0.0.1 ping statistics --- 00:17:12.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.188 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:12.188 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1567738 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1567738 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1567738 ']' 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:12.189 [2024-07-14 02:04:17.585646] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:12.189 [2024-07-14 02:04:17.585735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.189 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.189 [2024-07-14 02:04:17.665293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.189 [2024-07-14 02:04:17.758954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.189 [2024-07-14 02:04:17.759023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
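For readers following the trace: the entries above cover the TCP test topology that nvmf/common.sh builds from the two detected ice ports before nvmf_tgt is brought up. One port (cvl_0_0) is moved into a dedicated network namespace and given the target address, the other (cvl_0_1) stays in the root namespace as the initiator, and connectivity is verified in both directions. A condensed sketch of that sequence, using the same commands and the interface names detected on this host:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1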
00:17:12.189 [2024-07-14 02:04:17.759040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.189 [2024-07-14 02:04:17.759054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.189 [2024-07-14 02:04:17.759066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.189 [2024-07-14 02:04:17.759108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.189 02:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:12.447 02:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.447 02:04:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:12.447 [2024-07-14 02:04:18.135318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:12.721 ************************************ 00:17:12.721 START TEST lvs_grow_clean 00:17:12.721 ************************************ 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:12.721 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:12.980 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:12.980 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:13.240 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:13.240 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:13.240 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:13.500 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:13.500 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:13.500 02:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 lvol 150 00:17:13.760 02:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7bcad90d-7ee3-4068-9c7e-94743395ec5c 00:17:13.760 02:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:13.760 02:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:14.020 [2024-07-14 02:04:19.468071] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:14.020 [2024-07-14 02:04:19.468183] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:14.020 true 00:17:14.020 02:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:14.020 02:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:14.278 02:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:14.279 02:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:14.536 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7bcad90d-7ee3-4068-9c7e-94743395ec5c 00:17:14.793 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:15.053 [2024-07-14 02:04:20.507228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.053 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:15.312 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1568157 00:17:15.312 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:15.312 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:15.312 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1568157 /var/tmp/bdevperf.sock 00:17:15.312 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1568157 ']' 00:17:15.312 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.312 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.313 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:15.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.313 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.313 02:04:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:15.313 [2024-07-14 02:04:20.818741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:15.313 [2024-07-14 02:04:20.818825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1568157 ] 00:17:15.313 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.313 [2024-07-14 02:04:20.884894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.313 [2024-07-14 02:04:20.971899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.571 02:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.571 02:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:15.571 02:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:16.139 Nvme0n1 00:17:16.139 02:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:16.398 [ 00:17:16.398 { 00:17:16.398 "name": "Nvme0n1", 00:17:16.398 "aliases": [ 00:17:16.398 "7bcad90d-7ee3-4068-9c7e-94743395ec5c" 00:17:16.398 ], 00:17:16.398 "product_name": "NVMe disk", 00:17:16.398 "block_size": 4096, 00:17:16.398 "num_blocks": 38912, 00:17:16.398 "uuid": "7bcad90d-7ee3-4068-9c7e-94743395ec5c", 00:17:16.398 "assigned_rate_limits": { 00:17:16.398 "rw_ios_per_sec": 0, 00:17:16.398 "rw_mbytes_per_sec": 0, 00:17:16.398 "r_mbytes_per_sec": 0, 00:17:16.398 "w_mbytes_per_sec": 0 00:17:16.398 }, 00:17:16.398 "claimed": false, 00:17:16.398 "zoned": false, 00:17:16.398 "supported_io_types": { 00:17:16.398 "read": true, 00:17:16.398 "write": true, 00:17:16.398 "unmap": true, 00:17:16.398 "flush": true, 00:17:16.398 "reset": true, 00:17:16.398 "nvme_admin": true, 00:17:16.398 "nvme_io": true, 00:17:16.398 "nvme_io_md": false, 00:17:16.398 "write_zeroes": true, 00:17:16.398 "zcopy": false, 00:17:16.398 "get_zone_info": false, 00:17:16.398 "zone_management": false, 00:17:16.398 "zone_append": false, 00:17:16.398 "compare": true, 00:17:16.398 "compare_and_write": true, 00:17:16.398 "abort": true, 00:17:16.398 "seek_hole": false, 00:17:16.398 "seek_data": false, 00:17:16.398 "copy": true, 00:17:16.398 "nvme_iov_md": false 00:17:16.398 }, 00:17:16.398 "memory_domains": [ 00:17:16.398 { 00:17:16.398 "dma_device_id": "system", 00:17:16.398 "dma_device_type": 1 00:17:16.398 } 00:17:16.398 ], 00:17:16.398 "driver_specific": { 00:17:16.398 "nvme": [ 00:17:16.398 { 00:17:16.398 "trid": { 00:17:16.398 "trtype": "TCP", 00:17:16.398 "adrfam": "IPv4", 00:17:16.398 "traddr": "10.0.0.2", 00:17:16.398 "trsvcid": "4420", 00:17:16.398 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:16.398 }, 00:17:16.398 "ctrlr_data": { 00:17:16.398 "cntlid": 1, 00:17:16.398 "vendor_id": "0x8086", 00:17:16.398 "model_number": "SPDK bdev Controller", 00:17:16.398 "serial_number": "SPDK0", 00:17:16.398 "firmware_revision": "24.09", 00:17:16.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:16.398 "oacs": { 00:17:16.398 "security": 0, 00:17:16.398 "format": 0, 00:17:16.398 "firmware": 0, 00:17:16.398 "ns_manage": 0 00:17:16.398 }, 00:17:16.398 "multi_ctrlr": true, 00:17:16.398 "ana_reporting": false 00:17:16.398 }, 
00:17:16.398 "vs": { 00:17:16.398 "nvme_version": "1.3" 00:17:16.398 }, 00:17:16.398 "ns_data": { 00:17:16.398 "id": 1, 00:17:16.398 "can_share": true 00:17:16.398 } 00:17:16.398 } 00:17:16.398 ], 00:17:16.398 "mp_policy": "active_passive" 00:17:16.398 } 00:17:16.398 } 00:17:16.398 ] 00:17:16.398 02:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1568294 00:17:16.398 02:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:16.398 02:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:16.398 Running I/O for 10 seconds... 00:17:17.338 Latency(us) 00:17:17.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.338 Nvme0n1 : 1.00 13824.00 54.00 0.00 0.00 0.00 0.00 0.00 00:17:17.338 =================================================================================================================== 00:17:17.338 Total : 13824.00 54.00 0.00 0.00 0.00 0.00 0.00 00:17:17.338 00:17:18.273 02:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:18.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.531 Nvme0n1 : 2.00 14080.00 55.00 0.00 0.00 0.00 0.00 0.00 00:17:18.531 =================================================================================================================== 00:17:18.531 Total : 14080.00 55.00 0.00 0.00 0.00 0.00 0.00 00:17:18.531 00:17:18.531 true 00:17:18.531 02:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:18.531 02:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:18.791 02:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:18.791 02:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:18.791 02:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1568294 00:17:19.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.361 Nvme0n1 : 3.00 14165.33 55.33 0.00 0.00 0.00 0.00 0.00 00:17:19.361 =================================================================================================================== 00:17:19.361 Total : 14165.33 55.33 0.00 0.00 0.00 0.00 0.00 00:17:19.361 00:17:20.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.299 Nvme0n1 : 4.00 14240.00 55.62 0.00 0.00 0.00 0.00 0.00 00:17:20.299 =================================================================================================================== 00:17:20.299 Total : 14240.00 55.62 0.00 0.00 0.00 0.00 0.00 00:17:20.299 00:17:21.674 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.675 Nvme0n1 : 5.00 14273.60 55.76 0.00 0.00 0.00 0.00 0.00 00:17:21.675 =================================================================================================================== 00:17:21.675 
Total : 14273.60 55.76 0.00 0.00 0.00 0.00 0.00 00:17:21.675 00:17:22.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.609 Nvme0n1 : 6.00 14293.33 55.83 0.00 0.00 0.00 0.00 0.00 00:17:22.609 =================================================================================================================== 00:17:22.609 Total : 14293.33 55.83 0.00 0.00 0.00 0.00 0.00 00:17:22.609 00:17:23.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.546 Nvme0n1 : 7.00 14356.71 56.08 0.00 0.00 0.00 0.00 0.00 00:17:23.546 =================================================================================================================== 00:17:23.546 Total : 14356.71 56.08 0.00 0.00 0.00 0.00 0.00 00:17:23.546 00:17:24.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.512 Nvme0n1 : 8.00 14392.88 56.22 0.00 0.00 0.00 0.00 0.00 00:17:24.512 =================================================================================================================== 00:17:24.512 Total : 14392.88 56.22 0.00 0.00 0.00 0.00 0.00 00:17:24.512 00:17:25.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.450 Nvme0n1 : 9.00 14422.22 56.34 0.00 0.00 0.00 0.00 0.00 00:17:25.450 =================================================================================================================== 00:17:25.450 Total : 14422.22 56.34 0.00 0.00 0.00 0.00 0.00 00:17:25.450 00:17:26.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.388 Nvme0n1 : 10.00 14458.30 56.48 0.00 0.00 0.00 0.00 0.00 00:17:26.388 =================================================================================================================== 00:17:26.388 Total : 14458.30 56.48 0.00 0.00 0.00 0.00 0.00 00:17:26.388 00:17:26.388 00:17:26.388 Latency(us) 00:17:26.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.388 Nvme0n1 : 10.01 14461.69 56.49 0.00 0.00 8844.43 5000.15 15631.55 00:17:26.388 =================================================================================================================== 00:17:26.388 Total : 14461.69 56.49 0.00 0.00 8844.43 5000.15 15631.55 00:17:26.388 0 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1568157 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1568157 ']' 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1568157 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1568157 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1568157' 00:17:26.388 killing process with pid 1568157 00:17:26.388 02:04:32 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1568157 00:17:26.388 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.388 00:17:26.388 Latency(us) 00:17:26.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.388 =================================================================================================================== 00:17:26.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.388 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1568157 00:17:26.647 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:26.905 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:27.471 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:27.471 02:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:27.471 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:27.471 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:27.471 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:27.736 [2024-07-14 02:04:33.364670] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:27.736 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:27.993 request: 00:17:27.993 { 00:17:27.993 "uuid": "6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10", 00:17:27.993 "method": "bdev_lvol_get_lvstores", 00:17:27.993 "req_id": 1 00:17:27.993 } 00:17:27.993 Got JSON-RPC error response 00:17:27.993 response: 00:17:27.993 { 00:17:27.993 "code": -19, 00:17:27.993 "message": "No such device" 00:17:27.993 } 00:17:27.993 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:27.993 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:27.993 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:27.993 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:27.993 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:28.250 aio_bdev 00:17:28.250 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7bcad90d-7ee3-4068-9c7e-94743395ec5c 00:17:28.250 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=7bcad90d-7ee3-4068-9c7e-94743395ec5c 00:17:28.250 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:28.250 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:28.250 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:28.250 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:28.250 02:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:28.509 02:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7bcad90d-7ee3-4068-9c7e-94743395ec5c -t 2000 00:17:28.766 [ 00:17:28.766 { 00:17:28.766 "name": "7bcad90d-7ee3-4068-9c7e-94743395ec5c", 00:17:28.766 "aliases": [ 00:17:28.766 "lvs/lvol" 00:17:28.766 ], 00:17:28.766 "product_name": "Logical Volume", 00:17:28.766 "block_size": 4096, 00:17:28.766 "num_blocks": 38912, 00:17:28.766 "uuid": "7bcad90d-7ee3-4068-9c7e-94743395ec5c", 00:17:28.766 "assigned_rate_limits": { 00:17:28.766 "rw_ios_per_sec": 0, 00:17:28.766 "rw_mbytes_per_sec": 0, 00:17:28.766 "r_mbytes_per_sec": 0, 00:17:28.766 "w_mbytes_per_sec": 0 00:17:28.766 }, 00:17:28.766 "claimed": false, 00:17:28.766 "zoned": false, 00:17:28.766 "supported_io_types": { 00:17:28.766 "read": true, 00:17:28.766 "write": true, 00:17:28.766 "unmap": true, 00:17:28.766 "flush": false, 00:17:28.766 "reset": true, 00:17:28.766 "nvme_admin": false, 00:17:28.766 "nvme_io": false, 00:17:28.766 
"nvme_io_md": false, 00:17:28.766 "write_zeroes": true, 00:17:28.766 "zcopy": false, 00:17:28.766 "get_zone_info": false, 00:17:28.766 "zone_management": false, 00:17:28.766 "zone_append": false, 00:17:28.766 "compare": false, 00:17:28.766 "compare_and_write": false, 00:17:28.766 "abort": false, 00:17:28.767 "seek_hole": true, 00:17:28.767 "seek_data": true, 00:17:28.767 "copy": false, 00:17:28.767 "nvme_iov_md": false 00:17:28.767 }, 00:17:28.767 "driver_specific": { 00:17:28.767 "lvol": { 00:17:28.767 "lvol_store_uuid": "6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10", 00:17:28.767 "base_bdev": "aio_bdev", 00:17:28.767 "thin_provision": false, 00:17:28.767 "num_allocated_clusters": 38, 00:17:28.767 "snapshot": false, 00:17:28.767 "clone": false, 00:17:28.767 "esnap_clone": false 00:17:28.767 } 00:17:28.767 } 00:17:28.767 } 00:17:28.767 ] 00:17:28.767 02:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:28.767 02:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:28.767 02:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:29.025 02:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:29.025 02:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:29.025 02:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:29.283 02:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:29.283 02:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7bcad90d-7ee3-4068-9c7e-94743395ec5c 00:17:29.541 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e9a4a4f-ad0f-4ac6-8aa5-65d82ded8b10 00:17:29.800 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.059 00:17:30.059 real 0m17.508s 00:17:30.059 user 0m17.012s 00:17:30.059 sys 0m1.891s 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:30.059 ************************************ 00:17:30.059 END TEST lvs_grow_clean 00:17:30.059 ************************************ 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.059 ************************************ 00:17:30.059 START TEST lvs_grow_dirty 00:17:30.059 ************************************ 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:30.059 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.060 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.060 02:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:30.318 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:30.318 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:30.887 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:30.887 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:30.887 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:30.887 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:30.887 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:30.887 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1d3a1767-c494-420d-98f3-5fbabcab1903 lvol 150 00:17:31.145 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a2f662ba-d173-4875-92e3-106ff34f8afb 00:17:31.145 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.145 02:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:31.405 
[2024-07-14 02:04:37.091445] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:31.405 [2024-07-14 02:04:37.091598] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:31.405 true 00:17:31.665 02:04:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:31.665 02:04:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:31.924 02:04:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:31.924 02:04:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:32.182 02:04:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a2f662ba-d173-4875-92e3-106ff34f8afb 00:17:32.442 02:04:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:32.442 [2024-07-14 02:04:38.118525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.442 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1570319 00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1570319 /var/tmp/bdevperf.sock 00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1570319 ']' 00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
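At this point in the dirty run the fresh lvol (a2f662ba-d173-4875-92e3-106ff34f8afb) has been exported over NVMe/TCP and a standalone bdevperf process has been started against its own RPC socket; the next entries attach the remote controller and then kick off the 10-second randwrite workload. Condensed from the commands in the trace:

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a2f662ba-d173-4875-92e3-106ff34f8afb
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests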
00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.701 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:32.960 [2024-07-14 02:04:38.407435] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:32.960 [2024-07-14 02:04:38.407505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1570319 ] 00:17:32.960 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.960 [2024-07-14 02:04:38.468897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.960 [2024-07-14 02:04:38.560954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.221 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.221 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:33.221 02:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:33.479 Nvme0n1 00:17:33.479 02:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:33.738 [ 00:17:33.738 { 00:17:33.738 "name": "Nvme0n1", 00:17:33.738 "aliases": [ 00:17:33.738 "a2f662ba-d173-4875-92e3-106ff34f8afb" 00:17:33.738 ], 00:17:33.738 "product_name": "NVMe disk", 00:17:33.738 "block_size": 4096, 00:17:33.738 "num_blocks": 38912, 00:17:33.738 "uuid": "a2f662ba-d173-4875-92e3-106ff34f8afb", 00:17:33.738 "assigned_rate_limits": { 00:17:33.738 "rw_ios_per_sec": 0, 00:17:33.738 "rw_mbytes_per_sec": 0, 00:17:33.738 "r_mbytes_per_sec": 0, 00:17:33.738 "w_mbytes_per_sec": 0 00:17:33.738 }, 00:17:33.738 "claimed": false, 00:17:33.738 "zoned": false, 00:17:33.738 "supported_io_types": { 00:17:33.738 "read": true, 00:17:33.738 "write": true, 00:17:33.738 "unmap": true, 00:17:33.738 "flush": true, 00:17:33.738 "reset": true, 00:17:33.738 "nvme_admin": true, 00:17:33.738 "nvme_io": true, 00:17:33.738 "nvme_io_md": false, 00:17:33.738 "write_zeroes": true, 00:17:33.738 "zcopy": false, 00:17:33.738 "get_zone_info": false, 00:17:33.738 "zone_management": false, 00:17:33.738 "zone_append": false, 00:17:33.738 "compare": true, 00:17:33.738 "compare_and_write": true, 00:17:33.738 "abort": true, 00:17:33.738 "seek_hole": false, 00:17:33.738 "seek_data": false, 00:17:33.738 "copy": true, 00:17:33.738 "nvme_iov_md": false 00:17:33.738 }, 00:17:33.738 "memory_domains": [ 00:17:33.738 { 00:17:33.738 "dma_device_id": "system", 00:17:33.738 "dma_device_type": 1 00:17:33.738 } 00:17:33.738 ], 00:17:33.738 "driver_specific": { 00:17:33.738 "nvme": [ 00:17:33.738 { 00:17:33.738 "trid": { 00:17:33.738 "trtype": "TCP", 00:17:33.738 "adrfam": "IPv4", 00:17:33.738 "traddr": "10.0.0.2", 00:17:33.738 "trsvcid": "4420", 00:17:33.738 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:33.738 }, 00:17:33.738 "ctrlr_data": { 00:17:33.738 "cntlid": 1, 00:17:33.738 "vendor_id": "0x8086", 00:17:33.738 "model_number": "SPDK bdev Controller", 00:17:33.738 "serial_number": "SPDK0", 
00:17:33.738 "firmware_revision": "24.09", 00:17:33.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:33.738 "oacs": { 00:17:33.738 "security": 0, 00:17:33.738 "format": 0, 00:17:33.738 "firmware": 0, 00:17:33.738 "ns_manage": 0 00:17:33.738 }, 00:17:33.738 "multi_ctrlr": true, 00:17:33.738 "ana_reporting": false 00:17:33.738 }, 00:17:33.738 "vs": { 00:17:33.738 "nvme_version": "1.3" 00:17:33.738 }, 00:17:33.738 "ns_data": { 00:17:33.738 "id": 1, 00:17:33.738 "can_share": true 00:17:33.738 } 00:17:33.738 } 00:17:33.738 ], 00:17:33.738 "mp_policy": "active_passive" 00:17:33.738 } 00:17:33.738 } 00:17:33.738 ] 00:17:33.738 02:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1570456 00:17:33.738 02:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:33.738 02:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:33.998 Running I/O for 10 seconds... 00:17:34.938 Latency(us) 00:17:34.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.938 Nvme0n1 : 1.00 13139.00 51.32 0.00 0.00 0.00 0.00 0.00 00:17:34.938 =================================================================================================================== 00:17:34.938 Total : 13139.00 51.32 0.00 0.00 0.00 0.00 0.00 00:17:34.938 00:17:35.878 02:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:35.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.879 Nvme0n1 : 2.00 13353.50 52.16 0.00 0.00 0.00 0.00 0.00 00:17:35.879 =================================================================================================================== 00:17:35.879 Total : 13353.50 52.16 0.00 0.00 0.00 0.00 0.00 00:17:35.879 00:17:36.137 true 00:17:36.137 02:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:36.137 02:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:36.397 02:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:36.397 02:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:36.397 02:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1570456 00:17:36.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.969 Nvme0n1 : 3.00 13422.33 52.43 0.00 0.00 0.00 0.00 0.00 00:17:36.969 =================================================================================================================== 00:17:36.969 Total : 13422.33 52.43 0.00 0.00 0.00 0.00 0.00 00:17:36.969 00:17:37.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.910 Nvme0n1 : 4.00 13500.75 52.74 0.00 0.00 0.00 0.00 0.00 00:17:37.910 =================================================================================================================== 00:17:37.910 Total : 13500.75 52.74 0.00 
0.00 0.00 0.00 0.00 00:17:37.910 00:17:38.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.845 Nvme0n1 : 5.00 13568.60 53.00 0.00 0.00 0.00 0.00 0.00 00:17:38.845 =================================================================================================================== 00:17:38.845 Total : 13568.60 53.00 0.00 0.00 0.00 0.00 0.00 00:17:38.845 00:17:39.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.820 Nvme0n1 : 6.00 13596.50 53.11 0.00 0.00 0.00 0.00 0.00 00:17:39.820 =================================================================================================================== 00:17:39.820 Total : 13596.50 53.11 0.00 0.00 0.00 0.00 0.00 00:17:39.820 00:17:41.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.199 Nvme0n1 : 7.00 13619.86 53.20 0.00 0.00 0.00 0.00 0.00 00:17:41.199 =================================================================================================================== 00:17:41.199 Total : 13619.86 53.20 0.00 0.00 0.00 0.00 0.00 00:17:41.199 00:17:42.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.136 Nvme0n1 : 8.00 13651.38 53.33 0.00 0.00 0.00 0.00 0.00 00:17:42.136 =================================================================================================================== 00:17:42.136 Total : 13651.38 53.33 0.00 0.00 0.00 0.00 0.00 00:17:42.136 00:17:43.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.076 Nvme0n1 : 9.00 13681.22 53.44 0.00 0.00 0.00 0.00 0.00 00:17:43.076 =================================================================================================================== 00:17:43.076 Total : 13681.22 53.44 0.00 0.00 0.00 0.00 0.00 00:17:43.076 00:17:44.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.013 Nvme0n1 : 10.00 13717.10 53.58 0.00 0.00 0.00 0.00 0.00 00:17:44.013 =================================================================================================================== 00:17:44.013 Total : 13717.10 53.58 0.00 0.00 0.00 0.00 0.00 00:17:44.013 00:17:44.013 00:17:44.013 Latency(us) 00:17:44.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.013 Nvme0n1 : 10.01 13717.26 53.58 0.00 0.00 9318.99 2924.85 12087.75 00:17:44.013 =================================================================================================================== 00:17:44.013 Total : 13717.26 53.58 0.00 0.00 9318.99 2924.85 12087.75 00:17:44.013 0 00:17:44.013 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1570319 00:17:44.013 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1570319 ']' 00:17:44.013 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1570319 00:17:44.014 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:44.014 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:44.014 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1570319 00:17:44.014 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:44.014 02:04:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:44.014 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1570319' 00:17:44.014 killing process with pid 1570319 00:17:44.014 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1570319 00:17:44.014 Received shutdown signal, test time was about 10.000000 seconds 00:17:44.014 00:17:44.014 Latency(us) 00:17:44.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.014 =================================================================================================================== 00:17:44.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:44.014 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1570319 00:17:44.271 02:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:44.529 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:44.786 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:44.786 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1567738 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1567738 00:17:45.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1567738 Killed "${NVMF_APP[@]}" "$@" 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1571788 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1571788 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1571788 ']' 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.044 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:45.044 [2024-07-14 02:04:50.712653] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:45.044 [2024-07-14 02:04:50.712729] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.302 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.302 [2024-07-14 02:04:50.777192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.302 [2024-07-14 02:04:50.860571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.302 [2024-07-14 02:04:50.860627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.302 [2024-07-14 02:04:50.860655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.302 [2024-07-14 02:04:50.860672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.302 [2024-07-14 02:04:50.860683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
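For reference, the dirty-lvstore check that this part of the test keeps repeating can be reproduced by hand against a running target; this is only a sketch, with the rpc.py path and lvstore UUID taken from the log above:
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  LVS_UUID=1d3a1767-c494-420d-98f3-5fbabcab1903
  # nvmf_lvs_grow.sh@70 queries the lvstore and pulls the free cluster count out with jq
  free_clusters=$($RPC bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
  echo "free_clusters=$free_clusters"   # 61 in this run, which is what the dirty case expects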
00:17:45.302 [2024-07-14 02:04:50.860713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.302 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.302 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:45.302 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:45.302 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.302 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:45.302 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.302 02:04:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:45.868 [2024-07-14 02:04:51.264634] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:45.868 [2024-07-14 02:04:51.264773] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:45.868 [2024-07-14 02:04:51.264831] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:45.868 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:45.868 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a2f662ba-d173-4875-92e3-106ff34f8afb 00:17:45.868 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a2f662ba-d173-4875-92e3-106ff34f8afb 00:17:45.868 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:45.868 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:45.868 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:45.868 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:45.868 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:45.868 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a2f662ba-d173-4875-92e3-106ff34f8afb -t 2000 00:17:46.127 [ 00:17:46.127 { 00:17:46.127 "name": "a2f662ba-d173-4875-92e3-106ff34f8afb", 00:17:46.127 "aliases": [ 00:17:46.127 "lvs/lvol" 00:17:46.127 ], 00:17:46.127 "product_name": "Logical Volume", 00:17:46.127 "block_size": 4096, 00:17:46.127 "num_blocks": 38912, 00:17:46.127 "uuid": "a2f662ba-d173-4875-92e3-106ff34f8afb", 00:17:46.127 "assigned_rate_limits": { 00:17:46.127 "rw_ios_per_sec": 0, 00:17:46.127 "rw_mbytes_per_sec": 0, 00:17:46.127 "r_mbytes_per_sec": 0, 00:17:46.127 "w_mbytes_per_sec": 0 00:17:46.127 }, 00:17:46.127 "claimed": false, 00:17:46.127 "zoned": false, 00:17:46.127 "supported_io_types": { 00:17:46.127 "read": true, 00:17:46.127 "write": true, 00:17:46.127 "unmap": true, 00:17:46.127 "flush": false, 00:17:46.127 "reset": true, 00:17:46.127 "nvme_admin": false, 00:17:46.127 "nvme_io": false, 00:17:46.127 "nvme_io_md": 
false, 00:17:46.127 "write_zeroes": true, 00:17:46.127 "zcopy": false, 00:17:46.128 "get_zone_info": false, 00:17:46.128 "zone_management": false, 00:17:46.128 "zone_append": false, 00:17:46.128 "compare": false, 00:17:46.128 "compare_and_write": false, 00:17:46.128 "abort": false, 00:17:46.128 "seek_hole": true, 00:17:46.128 "seek_data": true, 00:17:46.128 "copy": false, 00:17:46.128 "nvme_iov_md": false 00:17:46.128 }, 00:17:46.128 "driver_specific": { 00:17:46.128 "lvol": { 00:17:46.128 "lvol_store_uuid": "1d3a1767-c494-420d-98f3-5fbabcab1903", 00:17:46.128 "base_bdev": "aio_bdev", 00:17:46.128 "thin_provision": false, 00:17:46.128 "num_allocated_clusters": 38, 00:17:46.128 "snapshot": false, 00:17:46.128 "clone": false, 00:17:46.128 "esnap_clone": false 00:17:46.128 } 00:17:46.128 } 00:17:46.128 } 00:17:46.128 ] 00:17:46.387 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:46.387 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:46.387 02:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:46.646 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:46.646 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:46.646 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:46.646 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:46.646 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:46.904 [2024-07-14 02:04:52.585748] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:47.162 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:47.420 request: 00:17:47.420 { 00:17:47.420 "uuid": "1d3a1767-c494-420d-98f3-5fbabcab1903", 00:17:47.420 "method": "bdev_lvol_get_lvstores", 00:17:47.420 "req_id": 1 00:17:47.420 } 00:17:47.420 Got JSON-RPC error response 00:17:47.420 response: 00:17:47.420 { 00:17:47.420 "code": -19, 00:17:47.420 "message": "No such device" 00:17:47.420 } 00:17:47.420 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:47.420 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.420 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:47.420 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.420 02:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:47.678 aio_bdev 00:17:47.678 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a2f662ba-d173-4875-92e3-106ff34f8afb 00:17:47.678 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a2f662ba-d173-4875-92e3-106ff34f8afb 00:17:47.678 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:47.678 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:47.678 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:47.678 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:47.678 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:47.934 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a2f662ba-d173-4875-92e3-106ff34f8afb -t 2000 00:17:48.192 [ 00:17:48.192 { 00:17:48.192 "name": "a2f662ba-d173-4875-92e3-106ff34f8afb", 00:17:48.192 "aliases": [ 00:17:48.192 "lvs/lvol" 00:17:48.192 ], 00:17:48.192 "product_name": "Logical Volume", 00:17:48.192 "block_size": 4096, 00:17:48.192 "num_blocks": 38912, 00:17:48.192 "uuid": "a2f662ba-d173-4875-92e3-106ff34f8afb", 00:17:48.192 "assigned_rate_limits": { 00:17:48.192 "rw_ios_per_sec": 0, 00:17:48.192 "rw_mbytes_per_sec": 0, 00:17:48.192 "r_mbytes_per_sec": 0, 00:17:48.192 "w_mbytes_per_sec": 0 00:17:48.192 }, 00:17:48.192 "claimed": false, 00:17:48.192 "zoned": false, 00:17:48.192 "supported_io_types": { 
00:17:48.192 "read": true, 00:17:48.192 "write": true, 00:17:48.192 "unmap": true, 00:17:48.192 "flush": false, 00:17:48.192 "reset": true, 00:17:48.192 "nvme_admin": false, 00:17:48.192 "nvme_io": false, 00:17:48.192 "nvme_io_md": false, 00:17:48.192 "write_zeroes": true, 00:17:48.192 "zcopy": false, 00:17:48.192 "get_zone_info": false, 00:17:48.192 "zone_management": false, 00:17:48.192 "zone_append": false, 00:17:48.192 "compare": false, 00:17:48.192 "compare_and_write": false, 00:17:48.192 "abort": false, 00:17:48.192 "seek_hole": true, 00:17:48.192 "seek_data": true, 00:17:48.192 "copy": false, 00:17:48.192 "nvme_iov_md": false 00:17:48.192 }, 00:17:48.192 "driver_specific": { 00:17:48.192 "lvol": { 00:17:48.192 "lvol_store_uuid": "1d3a1767-c494-420d-98f3-5fbabcab1903", 00:17:48.192 "base_bdev": "aio_bdev", 00:17:48.192 "thin_provision": false, 00:17:48.192 "num_allocated_clusters": 38, 00:17:48.192 "snapshot": false, 00:17:48.192 "clone": false, 00:17:48.192 "esnap_clone": false 00:17:48.192 } 00:17:48.192 } 00:17:48.192 } 00:17:48.192 ] 00:17:48.192 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:48.192 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:48.192 02:04:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:48.450 02:04:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:48.450 02:04:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:48.450 02:04:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:48.708 02:04:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:48.708 02:04:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a2f662ba-d173-4875-92e3-106ff34f8afb 00:17:48.967 02:04:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1d3a1767-c494-420d-98f3-5fbabcab1903 00:17:49.225 02:04:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:49.482 02:04:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:49.482 00:17:49.482 real 0m19.402s 00:17:49.482 user 0m47.568s 00:17:49.482 sys 0m5.363s 00:17:49.482 02:04:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.482 02:04:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:49.482 ************************************ 00:17:49.482 END TEST lvs_grow_dirty 00:17:49.482 ************************************ 00:17:49.482 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:49.482 02:04:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
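The teardown that just ran (nvmf_lvs_grow.sh@92-95) boils down to four calls; a minimal sketch using the names from this run and the same rpc.py path:
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_lvol_delete a2f662ba-d173-4875-92e3-106ff34f8afb              # remove the lvol bdev
  $RPC bdev_lvol_delete_lvstore -u 1d3a1767-c494-420d-98f3-5fbabcab1903   # remove the lvstore
  $RPC bdev_aio_delete aio_bdev                                           # detach the AIO bdev backing it
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev   # delete the backing file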
00:17:49.482 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:49.482 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:49.482 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:49.482 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:49.740 nvmf_trace.0 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:49.740 rmmod nvme_tcp 00:17:49.740 rmmod nvme_fabrics 00:17:49.740 rmmod nvme_keyring 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1571788 ']' 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1571788 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1571788 ']' 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1571788 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1571788 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1571788' 00:17:49.740 killing process with pid 1571788 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1571788 00:17:49.740 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1571788 00:17:49.999 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:49.999 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:49.999 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:49.999 
02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.999 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:49.999 02:04:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.999 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.999 02:04:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.907 02:04:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:51.907 00:17:51.907 real 0m42.162s 00:17:51.907 user 1m10.454s 00:17:51.907 sys 0m9.110s 00:17:51.907 02:04:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.907 02:04:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:51.907 ************************************ 00:17:51.907 END TEST nvmf_lvs_grow 00:17:51.907 ************************************ 00:17:51.907 02:04:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:51.907 02:04:57 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:51.907 02:04:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.907 02:04:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.907 02:04:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.165 ************************************ 00:17:52.165 START TEST nvmf_bdev_io_wait 00:17:52.165 ************************************ 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:52.165 * Looking for test storage... 
00:17:52.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.165 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:52.166 02:04:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:54.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:54.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:54.070 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:54.070 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:54.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:54.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:54.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:17:54.071 00:17:54.071 --- 10.0.0.2 ping statistics --- 00:17:54.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.071 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:17:54.071 00:17:54.071 --- 10.0.0.1 ping statistics --- 00:17:54.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.071 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1574300 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1574300 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1574300 ']' 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.071 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.331 [2024-07-14 02:04:59.772725] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:54.331 [2024-07-14 02:04:59.772821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.331 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.331 [2024-07-14 02:04:59.844391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.331 [2024-07-14 02:04:59.936260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.331 [2024-07-14 02:04:59.936324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.331 [2024-07-14 02:04:59.936350] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.331 [2024-07-14 02:04:59.936364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.331 [2024-07-14 02:04:59.936375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.331 [2024-07-14 02:04:59.936468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.331 [2024-07-14 02:04:59.936536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.331 [2024-07-14 02:04:59.936636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.331 [2024-07-14 02:04:59.936639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.331 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.331 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:54.331 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:54.331 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:54.331 02:04:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.331 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.331 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:54.331 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.331 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.331 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.331 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:54.331 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.331 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.589 [2024-07-14 02:05:00.085466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
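Condensed for readability, the target-side bring-up this bdev_io_wait run performs (the rpc_cmd calls traced around here) looks roughly like the sketch below; it assumes a target started with --wait-for-rpc as above and uses the names from the log (Malloc0, nqn.2016-06.io.spdk:cnode1):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache so I/O has to queue, which is the point of this test
  $RPC framework_start_init              # finish init, leaving the --wait-for-rpc pause
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420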
00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.589 Malloc0 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.589 [2024-07-14 02:05:00.148625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1574331 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1574332 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1574334 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:54.589 { 00:17:54.589 "params": { 00:17:54.589 "name": "Nvme$subsystem", 00:17:54.589 "trtype": "$TEST_TRANSPORT", 00:17:54.589 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:17:54.589 "adrfam": "ipv4", 00:17:54.589 "trsvcid": "$NVMF_PORT", 00:17:54.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.589 "hdgst": ${hdgst:-false}, 00:17:54.589 "ddgst": ${ddgst:-false} 00:17:54.589 }, 00:17:54.589 "method": "bdev_nvme_attach_controller" 00:17:54.589 } 00:17:54.589 EOF 00:17:54.589 )") 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1574337 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:54.589 { 00:17:54.589 "params": { 00:17:54.589 "name": "Nvme$subsystem", 00:17:54.589 "trtype": "$TEST_TRANSPORT", 00:17:54.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.589 "adrfam": "ipv4", 00:17:54.589 "trsvcid": "$NVMF_PORT", 00:17:54.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.589 "hdgst": ${hdgst:-false}, 00:17:54.589 "ddgst": ${ddgst:-false} 00:17:54.589 }, 00:17:54.589 "method": "bdev_nvme_attach_controller" 00:17:54.589 } 00:17:54.589 EOF 00:17:54.589 )") 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:54.589 { 00:17:54.589 "params": { 00:17:54.589 "name": "Nvme$subsystem", 00:17:54.589 "trtype": "$TEST_TRANSPORT", 00:17:54.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.589 "adrfam": "ipv4", 00:17:54.589 "trsvcid": "$NVMF_PORT", 00:17:54.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.589 "hdgst": ${hdgst:-false}, 00:17:54.589 "ddgst": ${ddgst:-false} 00:17:54.589 }, 00:17:54.589 "method": "bdev_nvme_attach_controller" 00:17:54.589 } 00:17:54.589 EOF 00:17:54.589 )") 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:54.589 { 00:17:54.589 "params": { 00:17:54.589 
"name": "Nvme$subsystem", 00:17:54.589 "trtype": "$TEST_TRANSPORT", 00:17:54.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.589 "adrfam": "ipv4", 00:17:54.589 "trsvcid": "$NVMF_PORT", 00:17:54.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.589 "hdgst": ${hdgst:-false}, 00:17:54.589 "ddgst": ${ddgst:-false} 00:17:54.589 }, 00:17:54.589 "method": "bdev_nvme_attach_controller" 00:17:54.589 } 00:17:54.589 EOF 00:17:54.589 )") 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1574331 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:54.589 "params": { 00:17:54.589 "name": "Nvme1", 00:17:54.589 "trtype": "tcp", 00:17:54.589 "traddr": "10.0.0.2", 00:17:54.589 "adrfam": "ipv4", 00:17:54.589 "trsvcid": "4420", 00:17:54.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.589 "hdgst": false, 00:17:54.589 "ddgst": false 00:17:54.589 }, 00:17:54.589 "method": "bdev_nvme_attach_controller" 00:17:54.589 }' 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:54.589 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:54.589 "params": { 00:17:54.589 "name": "Nvme1", 00:17:54.590 "trtype": "tcp", 00:17:54.590 "traddr": "10.0.0.2", 00:17:54.590 "adrfam": "ipv4", 00:17:54.590 "trsvcid": "4420", 00:17:54.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.590 "hdgst": false, 00:17:54.590 "ddgst": false 00:17:54.590 }, 00:17:54.590 "method": "bdev_nvme_attach_controller" 00:17:54.590 }' 00:17:54.590 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:54.590 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:54.590 "params": { 00:17:54.590 "name": "Nvme1", 00:17:54.590 "trtype": "tcp", 00:17:54.590 "traddr": "10.0.0.2", 00:17:54.590 "adrfam": "ipv4", 00:17:54.590 "trsvcid": "4420", 00:17:54.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.590 "hdgst": false, 00:17:54.590 "ddgst": false 00:17:54.590 }, 00:17:54.590 "method": "bdev_nvme_attach_controller" 00:17:54.590 }' 00:17:54.590 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:54.590 02:05:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:54.590 "params": { 00:17:54.590 "name": "Nvme1", 00:17:54.590 "trtype": "tcp", 00:17:54.590 "traddr": "10.0.0.2", 00:17:54.590 "adrfam": "ipv4", 00:17:54.590 "trsvcid": "4420", 00:17:54.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.590 "hdgst": false, 00:17:54.590 "ddgst": false 00:17:54.590 }, 00:17:54.590 "method": 
"bdev_nvme_attach_controller" 00:17:54.590 }' 00:17:54.590 [2024-07-14 02:05:00.197162] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:54.590 [2024-07-14 02:05:00.197159] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:54.590 [2024-07-14 02:05:00.197160] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:54.590 [2024-07-14 02:05:00.197159] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:54.590 [2024-07-14 02:05:00.197255] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:54.590 [2024-07-14 02:05:00.197276] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 02:05:00.197276] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 02:05:00.197277] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:54.590 --proc-type=auto ] 00:17:54.590 --proc-type=auto ] 00:17:54.590 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.847 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.847 [2024-07-14 02:05:00.376913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.847 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.847 [2024-07-14 02:05:00.451795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:54.847 [2024-07-14 02:05:00.476726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.136 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.136 [2024-07-14 02:05:00.552018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:55.136 [2024-07-14 02:05:00.576271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.136 [2024-07-14 02:05:00.645246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.136 [2024-07-14 02:05:00.650229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:55.136 [2024-07-14 02:05:00.714905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:55.136 Running I/O for 1 seconds... 00:17:55.396 Running I/O for 1 seconds... 00:17:55.396 Running I/O for 1 seconds... 00:17:55.396 Running I/O for 1 seconds... 
00:17:56.335 00:17:56.335 Latency(us) 00:17:56.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.335 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:56.335 Nvme1n1 : 1.00 195565.31 763.93 0.00 0.00 651.88 286.72 910.22 00:17:56.335 =================================================================================================================== 00:17:56.335 Total : 195565.31 763.93 0.00 0.00 651.88 286.72 910.22 00:17:56.335 00:17:56.335 Latency(us) 00:17:56.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.335 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:56.335 Nvme1n1 : 1.01 11237.65 43.90 0.00 0.00 11339.01 5170.06 19029.71 00:17:56.335 =================================================================================================================== 00:17:56.335 Total : 11237.65 43.90 0.00 0.00 11339.01 5170.06 19029.71 00:17:56.335 00:17:56.335 Latency(us) 00:17:56.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.335 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:56.335 Nvme1n1 : 1.01 8481.03 33.13 0.00 0.00 15014.23 9806.13 25631.86 00:17:56.335 =================================================================================================================== 00:17:56.335 Total : 8481.03 33.13 0.00 0.00 15014.23 9806.13 25631.86 00:17:56.335 00:17:56.335 Latency(us) 00:17:56.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.335 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:56.335 Nvme1n1 : 1.01 9379.63 36.64 0.00 0.00 13587.71 8009.96 25243.50 00:17:56.335 =================================================================================================================== 00:17:56.335 Total : 9379.63 36.64 0.00 0.00 13587.71 8009.96 25243.50 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1574332 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1574334 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1574337 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.594 rmmod nvme_tcp 00:17:56.594 rmmod nvme_fabrics 00:17:56.594 rmmod nvme_keyring 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.594 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:56.595 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:56.595 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1574300 ']' 00:17:56.595 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1574300 00:17:56.595 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1574300 ']' 00:17:56.595 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1574300 00:17:56.595 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:56.595 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.595 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1574300 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1574300' 00:17:56.854 killing process with pid 1574300 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1574300 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1574300 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.854 02:05:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.395 02:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:59.395 00:17:59.395 real 0m6.972s 00:17:59.395 user 0m14.987s 00:17:59.395 sys 0m3.610s 00:17:59.395 02:05:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:59.395 02:05:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:59.395 ************************************ 00:17:59.395 END TEST nvmf_bdev_io_wait 00:17:59.395 ************************************ 00:17:59.395 02:05:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:59.395 02:05:04 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:59.395 02:05:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:59.395 02:05:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.395 02:05:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:59.395 ************************************ 00:17:59.395 START TEST nvmf_queue_depth 00:17:59.395 ************************************ 
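[editor's note] Each of these target tests finishes with the same nvmftestfini teardown traced just above: unload the NVMe-oF host modules, kill the nvmf_tgt started earlier, and drop the test namespace and addresses. Done by hand on this rig it amounts roughly to the following; the pid, netns and interface names are the ones this run uses, and the netns removal is an assumption about what remove_spdk_ns does.

# Approximate manual equivalent of nvmftestfini / nvmf_tcp_fini as traced above.
sync
for mod in nvme-tcp nvme-fabrics nvme-keyring; do
    modprobe -v -r "$mod" || true              # the script runs this under set +e; modules may be busy
done
kill "$nvmfpid"                                 # killprocess $nvmfpid: stop the nvmf_tgt started earlier
ip netns delete cvl_0_0_ns_spdk 2>/dev/null     # assumed effect of remove_spdk_ns
ip -4 addr flush cvl_0_1                        # drop the initiator-side test address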
00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:59.395 * Looking for test storage... 00:17:59.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.395 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:59.396 02:05:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:01.304 
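[editor's note] nvmftestinit has just populated the supported-device tables (e810/x722/mlx, keyed by PCI vendor:device ID), and the scan that starts below walks each matching PCI function's sysfs node to find its kernel net device. The real helper iterates a PCI cache prepared by the setup scripts; the following is a self-contained sketch of the same idea using lspci, limited to the E810 ID (8086:159b) seen on this rig.

# Standalone sketch of gather_supported_nvmf_pci_devs for one device ID; the production
# helper loops over every supported Intel/Mellanox ID from a prebuilt pci_bus_cache instead.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue               # skip functions with no bound net driver
        echo "Found net devices under $pci: $(basename "$netdir")"
    done
done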
02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:01.304 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:01.304 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:01.304 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:01.304 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.304 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:01.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:01.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:18:01.305 00:18:01.305 --- 10.0.0.2 ping statistics --- 00:18:01.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.305 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:18:01.305 00:18:01.305 --- 10.0.0.1 ping statistics --- 00:18:01.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.305 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1576555 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1576555 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1576555 ']' 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.305 02:05:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.305 [2024-07-14 02:05:06.819218] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:18:01.305 [2024-07-14 02:05:06.819312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.305 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.305 [2024-07-14 02:05:06.888745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.305 [2024-07-14 02:05:06.979319] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.305 [2024-07-14 02:05:06.979382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.305 [2024-07-14 02:05:06.979409] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.305 [2024-07-14 02:05:06.979422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.305 [2024-07-14 02:05:06.979434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.305 [2024-07-14 02:05:06.979464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.564 [2024-07-14 02:05:07.127535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.564 Malloc0 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.564 
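[editor's note] The target side of the queue-depth test is provisioned with a handful of RPCs, traced immediately above and below: create the TCP transport, back it with a 64 MiB / 512 B Malloc bdev, create cnode1, attach the namespace, then listen on 10.0.0.2:4420. Issued directly with scripts/rpc.py instead of the rpc_cmd wrapper, they would look roughly like this (values exactly as used in this run):

# rpc.py equivalents of the rpc_cmd calls in queue_depth.sh steps 23-27.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420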
02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.564 [2024-07-14 02:05:07.185555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1576582 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1576582 /var/tmp/bdevperf.sock 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1576582 ']' 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.564 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.564 [2024-07-14 02:05:07.232097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:18:01.564 [2024-07-14 02:05:07.232162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576582 ] 00:18:01.823 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.823 [2024-07-14 02:05:07.295018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.823 [2024-07-14 02:05:07.385763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.823 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.823 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:01.823 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:01.823 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.823 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:02.082 NVMe0n1 00:18:02.082 02:05:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.082 02:05:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.341 Running I/O for 10 seconds... 00:18:12.338 00:18:12.338 Latency(us) 00:18:12.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.338 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:12.338 Verification LBA range: start 0x0 length 0x4000 00:18:12.338 NVMe0n1 : 10.09 8372.06 32.70 0.00 0.00 121678.37 24660.95 77672.30 00:18:12.338 =================================================================================================================== 00:18:12.338 Total : 8372.06 32.70 0.00 0.00 121678.37 24660.95 77672.30 00:18:12.338 0 00:18:12.338 02:05:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1576582 00:18:12.338 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1576582 ']' 00:18:12.338 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1576582 00:18:12.338 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:12.339 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.339 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576582 00:18:12.339 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:12.339 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:12.339 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576582' 00:18:12.339 killing process with pid 1576582 00:18:12.339 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1576582 00:18:12.339 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.339 00:18:12.339 Latency(us) 00:18:12.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.339 
=================================================================================================================== 00:18:12.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.339 02:05:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1576582 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.599 rmmod nvme_tcp 00:18:12.599 rmmod nvme_fabrics 00:18:12.599 rmmod nvme_keyring 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1576555 ']' 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1576555 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1576555 ']' 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1576555 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576555 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576555' 00:18:12.599 killing process with pid 1576555 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1576555 00:18:12.599 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1576555 00:18:13.167 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.167 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.167 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.167 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.167 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.167 02:05:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.167 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.167 02:05:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.073 02:05:20 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.073 00:18:15.073 real 0m15.972s 00:18:15.073 user 0m22.668s 00:18:15.073 sys 0m2.971s 00:18:15.073 02:05:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:15.073 02:05:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:15.073 ************************************ 00:18:15.073 END TEST nvmf_queue_depth 00:18:15.073 ************************************ 00:18:15.073 02:05:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:15.073 02:05:20 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:15.073 02:05:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:15.073 02:05:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.073 02:05:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:15.073 ************************************ 00:18:15.073 START TEST nvmf_target_multipath 00:18:15.073 ************************************ 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:15.073 * Looking for test storage... 00:18:15.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.073 02:05:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.074 02:05:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:16.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:16.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:16.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.978 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:16.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.979 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:18:17.239 00:18:17.239 --- 10.0.0.2 ping statistics --- 00:18:17.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.239 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:17.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:18:17.239 00:18:17.239 --- 10.0.0.1 ping statistics --- 00:18:17.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.239 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.239 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:17.240 only one NIC for nvmf test 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.240 rmmod nvme_tcp 00:18:17.240 rmmod nvme_fabrics 00:18:17.240 rmmod nvme_keyring 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.240 02:05:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.780 00:18:19.780 real 0m4.251s 00:18:19.780 user 0m0.779s 00:18:19.780 sys 0m1.470s 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:19.780 02:05:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:19.780 ************************************ 00:18:19.780 END TEST nvmf_target_multipath 00:18:19.780 ************************************ 00:18:19.780 02:05:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:19.780 02:05:24 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:19.780 02:05:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:19.780 02:05:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:19.780 02:05:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:19.780 ************************************ 00:18:19.780 START TEST nvmf_zcopy 00:18:19.780 ************************************ 00:18:19.780 02:05:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:19.780 * Looking for test storage... 
00:18:19.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.780 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:19.781 02:05:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:21.720 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.720 
02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:21.720 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:21.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:21.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.720 02:05:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:21.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:18:21.720 00:18:21.720 --- 10.0.0.2 ping statistics --- 00:18:21.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.720 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:21.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:18:21.720 00:18:21.720 --- 10.0.0.1 ping statistics --- 00:18:21.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.720 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:21.720 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1581749 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1581749 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1581749 ']' 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:21.721 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.721 [2024-07-14 02:05:27.203550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:21.721 [2024-07-14 02:05:27.203627] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.721 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.721 [2024-07-14 02:05:27.267912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.721 [2024-07-14 02:05:27.355037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.721 [2024-07-14 02:05:27.355101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:21.721 [2024-07-14 02:05:27.355115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.721 [2024-07-14 02:05:27.355126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.721 [2024-07-14 02:05:27.355136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.721 [2024-07-14 02:05:27.355173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.982 [2024-07-14 02:05:27.500232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.982 [2024-07-14 02:05:27.516409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.982 malloc0 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.982 
02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:21.982 { 00:18:21.982 "params": { 00:18:21.982 "name": "Nvme$subsystem", 00:18:21.982 "trtype": "$TEST_TRANSPORT", 00:18:21.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:21.982 "adrfam": "ipv4", 00:18:21.982 "trsvcid": "$NVMF_PORT", 00:18:21.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:21.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:21.982 "hdgst": ${hdgst:-false}, 00:18:21.982 "ddgst": ${ddgst:-false} 00:18:21.982 }, 00:18:21.982 "method": "bdev_nvme_attach_controller" 00:18:21.982 } 00:18:21.982 EOF 00:18:21.982 )") 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:21.982 02:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:21.982 "params": { 00:18:21.982 "name": "Nvme1", 00:18:21.982 "trtype": "tcp", 00:18:21.982 "traddr": "10.0.0.2", 00:18:21.982 "adrfam": "ipv4", 00:18:21.982 "trsvcid": "4420", 00:18:21.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.982 "hdgst": false, 00:18:21.982 "ddgst": false 00:18:21.982 }, 00:18:21.982 "method": "bdev_nvme_attach_controller" 00:18:21.982 }' 00:18:21.982 [2024-07-14 02:05:27.600212] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:21.982 [2024-07-14 02:05:27.600293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581772 ] 00:18:21.982 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.982 [2024-07-14 02:05:27.670480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.243 [2024-07-14 02:05:27.768194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.502 Running I/O for 10 seconds... 
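The verify pass launched just above amounts to handing bdevperf a small JSON config that attaches one NVMe-oF controller over TCP and then driving it for 10 seconds at queue depth 128 with 8 KiB I/O. A minimal sketch of an equivalent stand-alone run follows; the bdev_nvme_attach_controller parameters are exactly the ones printed by the trace, while the outer "subsystems"/"config" wrapper and the /tmp path are assumptions, since only the inner object appears in the log.

    # Sketch: reproduce the traced verify run by hand (wrapper layout assumed).
    cat > /tmp/bdevperf_nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload as the trace: 10 s verify, queue depth 128, 8 KiB I/O.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192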
00:18:32.484 00:18:32.484 Latency(us) 00:18:32.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.484 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:32.484 Verification LBA range: start 0x0 length 0x1000 00:18:32.484 Nvme1n1 : 10.01 5961.60 46.58 0.00 0.00 21411.72 2985.53 32039.82 00:18:32.484 =================================================================================================================== 00:18:32.484 Total : 5961.60 46.58 0.00 0.00 21411.72 2985.53 32039.82 00:18:32.744 02:05:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1583019 00:18:32.744 02:05:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:32.744 02:05:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:32.744 02:05:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:32.744 02:05:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:32.744 02:05:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:32.744 02:05:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:32.744 02:05:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:32.744 02:05:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:32.744 { 00:18:32.744 "params": { 00:18:32.744 "name": "Nvme$subsystem", 00:18:32.744 "trtype": "$TEST_TRANSPORT", 00:18:32.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:32.744 "adrfam": "ipv4", 00:18:32.744 "trsvcid": "$NVMF_PORT", 00:18:32.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:32.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:32.744 "hdgst": ${hdgst:-false}, 00:18:32.744 "ddgst": ${ddgst:-false} 00:18:32.744 }, 00:18:32.744 "method": "bdev_nvme_attach_controller" 00:18:32.744 } 00:18:32.744 EOF 00:18:32.745 )") 00:18:32.745 02:05:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:32.745 [2024-07-14 02:05:38.353121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.353165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 02:05:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
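The long stream of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that begins here comes from repeated nvmf_subsystem_add_ns RPCs: malloc0 was already attached to nqn.2016-06.io.spdk:cnode1 as NSID 1 at target/zcopy.sh@30, so each further attempt is rejected once the subsystem has been paused for the update, and the run simply continues past the errors. A sketch of a single call that yields one such error pair, using the same identifiers as the trace (scripts/rpc.py is the standard SPDK RPC client; the trace goes through the rpc_cmd wrapper instead):

    # Sketch: re-adding a namespace that already occupies NSID 1.
    # Produces "Requested NSID 1 already in use" from subsystem.c followed by
    # "Unable to add namespace" from nvmf_rpc.c, as seen repeatedly above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1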
00:18:32.745 02:05:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:32.745 02:05:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:32.745 "params": { 00:18:32.745 "name": "Nvme1", 00:18:32.745 "trtype": "tcp", 00:18:32.745 "traddr": "10.0.0.2", 00:18:32.745 "adrfam": "ipv4", 00:18:32.745 "trsvcid": "4420", 00:18:32.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.745 "hdgst": false, 00:18:32.745 "ddgst": false 00:18:32.745 }, 00:18:32.745 "method": "bdev_nvme_attach_controller" 00:18:32.745 }' 00:18:32.745 [2024-07-14 02:05:38.361074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.361100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 [2024-07-14 02:05:38.369097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.369121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 [2024-07-14 02:05:38.377116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.377139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 [2024-07-14 02:05:38.385139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.385189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 [2024-07-14 02:05:38.393159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.393203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 [2024-07-14 02:05:38.393969] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:18:32.745 [2024-07-14 02:05:38.394045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583019 ] 00:18:32.745 [2024-07-14 02:05:38.401199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.401233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 [2024-07-14 02:05:38.409224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.409250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 [2024-07-14 02:05:38.417254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.417279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.745 [2024-07-14 02:05:38.425270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.425294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.745 [2024-07-14 02:05:38.433301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.745 [2024-07-14 02:05:38.433326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.441325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.441350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.449348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.449373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.457371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.457395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.460411] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.005 [2024-07-14 02:05:38.465410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.465439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.473448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.473487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.481443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.481469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.489463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.489489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.497483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 
02:05:38.497509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.505505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.505531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.513546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.513577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.521579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.521618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.529557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.529579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.537579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.537601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.545601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.545633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.553623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.553644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.555347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.005 [2024-07-14 02:05:38.561643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.561665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.569686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.569714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.577724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.577761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.585748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.585785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.593774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.593811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.601798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.601839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.609811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.609862] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.617834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.617897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.625818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.625840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.633899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.633952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.641924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.641964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.649928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.649961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.657940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.657964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.665957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.665981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.673965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.673987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.681997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.682023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.005 [2024-07-14 02:05:38.690019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.005 [2024-07-14 02:05:38.690044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.698043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.698069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.706062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.706086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.714084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.714106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.722105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.722128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.730127] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.730163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.738165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.738187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.746187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.746225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.754233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.754256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.762245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.762267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.770266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.770288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.778269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.778290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.786305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.786326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.794327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.794348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.802357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.802381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.810375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.810396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.818398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.818419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.826421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.826442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.834444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.834464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.842469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.842490] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.850490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.850513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.858511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.858532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.866533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.866554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.874554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.874575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.882576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.882598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.890600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.890622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.898629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.898654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.906644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.906667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 Running I/O for 5 seconds... 
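This second bdevperf pass reuses the same JSON config pattern but switches the workload: 5 seconds of 50/50 random read/write (-w randrw -M 50) at queue depth 128 with 8 KiB I/O, started in the background (perfpid=1583019), with the namespace RPCs seen above interleaved while I/O is in flight. A minimal sketch of the equivalent invocation, reusing the hypothetical /tmp/bdevperf_nvmf.json from the earlier sketch:

    # Sketch: the 5 s mixed workload from the trace, run in the background so
    # RPCs can be issued against the live subsystem; flags match the trace.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    # ... issue nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns RPCs here ...
    wait "$perfpid"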
00:18:33.266 [2024-07-14 02:05:38.914666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.914688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.927817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.927847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.938531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.938560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.266 [2024-07-14 02:05:38.949350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.266 [2024-07-14 02:05:38.949380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:38.959999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:38.960034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:38.970518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:38.970547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:38.981401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:38.981430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:38.992145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:38.992173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.002713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.002741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.013010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.013038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.023498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.023533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.034187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.034215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.047549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.047577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.057301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.057329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.068077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 
[2024-07-14 02:05:39.068105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.078354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.078382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.087924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.087953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.098927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.098955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.109317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.109345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.119754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.119781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.132333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.132361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.141997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.142026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.152962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.152989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.163574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.163601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.174464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.174493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.185087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.185115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.196353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.196381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.525 [2024-07-14 02:05:39.206828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.525 [2024-07-14 02:05:39.206856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.217628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.217658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.228354] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.228389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.239495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.239523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.250649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.250678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.261339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.261367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.272121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.272149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.282786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.282815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.295187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.295216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.304360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.304387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.315232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.315259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.325512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.325541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.336125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.336153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.347013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.347042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.358143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.358171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.371171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.371200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.381262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.381292] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.392957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.392987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.404431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.404459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.417059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.417087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.426584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.426613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.437293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.437338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.447385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.447413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.458498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.458527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.784 [2024-07-14 02:05:39.469093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.784 [2024-07-14 02:05:39.469121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.479738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.479767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.490374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.490403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.501050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.501079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.511473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.511501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.521669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.521697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.532095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.532123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.542240] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.542268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.552902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.552937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.563558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.563586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.576031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.576060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.585501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.585529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.597096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.597124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.609808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.609837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.619403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.619432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.630678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.630706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.640825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.640859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.651240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.651269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.661564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.661593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.672725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.672754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.683755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.683783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.694765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.694794] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.705398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.705426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.716146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.716175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.045 [2024-07-14 02:05:39.727266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.045 [2024-07-14 02:05:39.727294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.738266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.738295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.749390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.749420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.760653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.760681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.771879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.771923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.782938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.782966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.793700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.793731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.806467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.806498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.816372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.816404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.828269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.828314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.839201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.839229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.849753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.849782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.861078] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.861107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.872095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.872123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.883104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.883132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.896128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.896156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.906900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.906928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.918062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.918089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.929080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.929109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.940319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.940349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.951762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.951792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.963252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.963283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.974181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.974210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.304 [2024-07-14 02:05:39.985502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.304 [2024-07-14 02:05:39.985533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.564 [2024-07-14 02:05:39.997005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.564 [2024-07-14 02:05:39.997033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.564 [2024-07-14 02:05:40.007982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.564 [2024-07-14 02:05:40.008010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.564 [2024-07-14 02:05:40.019049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.564 [2024-07-14 02:05:40.019091] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.564 [2024-07-14 02:05:40.032200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.564 [2024-07-14 02:05:40.032230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.564 [2024-07-14 02:05:40.042332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.564 [2024-07-14 02:05:40.042368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.564 [2024-07-14 02:05:40.054189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.564 [2024-07-14 02:05:40.054218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.564 [2024-07-14 02:05:40.065359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.564 [2024-07-14 02:05:40.065389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.564 [2024-07-14 02:05:40.076956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.564 [2024-07-14 02:05:40.076984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.564 [2024-07-14 02:05:40.087727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.564 [2024-07-14 02:05:40.087758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.101006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.101034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.110718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.110748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.122550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.122580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.135050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.135078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.145161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.145189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.156652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.156680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.167449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.167478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.178149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.178177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.190395] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.190424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.199723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.199751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.210520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.210548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.220997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.221026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.233085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.233114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.242518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.242546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.565 [2024-07-14 02:05:40.255449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.565 [2024-07-14 02:05:40.255477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.265186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.265214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.276056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.276084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.288499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.288527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.297749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.297777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.310332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.310360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.320502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.320530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.330768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.330796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.343612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.343640] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.354807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.354836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.363693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.363721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.374732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.374761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.387046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.387075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.396091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.396119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.407115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.407144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.417526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.417555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.427508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.427536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.825 [2024-07-14 02:05:40.438041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.825 [2024-07-14 02:05:40.438068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.826 [2024-07-14 02:05:40.448509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.826 [2024-07-14 02:05:40.448538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.826 [2024-07-14 02:05:40.458781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.826 [2024-07-14 02:05:40.458808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.826 [2024-07-14 02:05:40.469074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.826 [2024-07-14 02:05:40.469103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.826 [2024-07-14 02:05:40.479619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.826 [2024-07-14 02:05:40.479648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.826 [2024-07-14 02:05:40.490111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.826 [2024-07-14 02:05:40.490138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.826 [2024-07-14 02:05:40.500609] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.826 [2024-07-14 02:05:40.500637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.826 [2024-07-14 02:05:40.510086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.826 [2024-07-14 02:05:40.510115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.521167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.521196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.531597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.531625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.541936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.541965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.554711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.554740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.565945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.565975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.575219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.575248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.586722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.586750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.597282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.597310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.607876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.607903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.620467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.620495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.630191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.630219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.641381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.641409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.652329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.652358] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.662737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.662768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.674699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.674735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.683933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.683960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.086 [2024-07-14 02:05:40.695034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.086 [2024-07-14 02:05:40.695062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.087 [2024-07-14 02:05:40.707416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.087 [2024-07-14 02:05:40.707444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.087 [2024-07-14 02:05:40.719051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.087 [2024-07-14 02:05:40.719078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.087 [2024-07-14 02:05:40.728149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.087 [2024-07-14 02:05:40.728176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.087 [2024-07-14 02:05:40.739428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.087 [2024-07-14 02:05:40.739454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.087 [2024-07-14 02:05:40.751514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.087 [2024-07-14 02:05:40.751541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.087 [2024-07-14 02:05:40.760499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.087 [2024-07-14 02:05:40.760526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.087 [2024-07-14 02:05:40.771038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.087 [2024-07-14 02:05:40.771065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.783434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.783462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.793134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.793162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.803988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.804016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.814122] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.814150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.824507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.824535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.834958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.834986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.845158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.845185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.855278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.855306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.866064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.866091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.877030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.877065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.888208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.888238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.899223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.899251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.911951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.911978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.921798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.921826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.933073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.933101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.943660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.943687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.954683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.954713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.965957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.965984] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.977288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.977316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.988231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.988258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:40.999171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:40.999198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:41.009959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:41.009986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:41.021036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:41.021063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.347 [2024-07-14 02:05:41.031830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.347 [2024-07-14 02:05:41.031858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.042446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.042474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.053141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.053169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.063852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.063905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.076304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.076347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.085574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.085609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.096912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.096939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.107649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.107676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.118339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.118367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.129160] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.129188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.139923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.139951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.150621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.150648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.163211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.163239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.173099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.173126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.183970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.183998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.607 [2024-07-14 02:05:41.196429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.607 [2024-07-14 02:05:41.196456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.608 [2024-07-14 02:05:41.206341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.608 [2024-07-14 02:05:41.206368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.608 [2024-07-14 02:05:41.217947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.608 [2024-07-14 02:05:41.217974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.608 [2024-07-14 02:05:41.228699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.608 [2024-07-14 02:05:41.228728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.608 [2024-07-14 02:05:41.239667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.608 [2024-07-14 02:05:41.239695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.608 [2024-07-14 02:05:41.251942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.608 [2024-07-14 02:05:41.251969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.608 [2024-07-14 02:05:41.261727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.608 [2024-07-14 02:05:41.261754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.608 [2024-07-14 02:05:41.273094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.608 [2024-07-14 02:05:41.273121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.608 [2024-07-14 02:05:41.284179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.608 [2024-07-14 02:05:41.284206] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.608 [2024-07-14 02:05:41.295360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.608 [2024-07-14 02:05:41.295395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.306175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.306203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.316716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.316744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.327329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.327357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.338907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.338934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.349797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.349827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.360452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.360479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.373139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.373165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.383521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.383548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.394792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.394819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.405614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.405641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.416035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.416062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.426846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.426882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.439208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.439235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-07-14 02:05:41.448686] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-07-14 02:05:41.448713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 (this pair of errors, subsystem.c:2054 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546 "Unable to add namespace", repeats at roughly 10 ms intervals from 02:05:41.44 through 02:05:43.79 while the test keeps retrying the add-namespace RPC against an NSID that is still attached) 00:18:38.223 [2024-07-14 02:05:43.795322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.795349]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.804272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.804299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.815251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.815277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.825421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.825448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.835413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.835440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.845521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.845548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.855683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.855710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.865976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.866003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.876179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.876206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.886451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.886478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.896941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.896969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.223 [2024-07-14 02:05:43.909372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.223 [2024-07-14 02:05:43.909399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.918650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.918679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.928392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.928418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 00:18:38.484 Latency(us) 00:18:38.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.484 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:38.484 Nvme1n1 : 5.01 11931.68 93.22 0.00 0.00 10713.64 4636.07 23884.23 00:18:38.484 
=================================================================================================================== 00:18:38.484 Total : 11931.68 93.22 0.00 0.00 10713.64 4636.07 23884.23 00:18:38.484 [2024-07-14 02:05:43.933982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.934006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.941990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.942015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.950043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.950078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.958100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.958152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.966115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.966161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.974139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.974190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.982162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.982207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.990196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.990243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:43.998218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:43.998264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.006239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.006291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.014245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.014295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.022273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.022318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.030296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.030345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.038312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.038356] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.046332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.046381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.054362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.054416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.062352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.062403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.070339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.070363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.078409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.078454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.086437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.086481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.094465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.094509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.102421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.102445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.110490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.110534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.118524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.118568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.484 [2024-07-14 02:05:44.126535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.484 [2024-07-14 02:05:44.126577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.485 [2024-07-14 02:05:44.134498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.485 [2024-07-14 02:05:44.134518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.485 [2024-07-14 02:05:44.142520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.485 [2024-07-14 02:05:44.142540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.485 [2024-07-14 02:05:44.150541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.485 [2024-07-14 02:05:44.150561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.485 
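The long run of identical failures above is, by all appearances, deliberate: the zcopy test keeps re-issuing the add-namespace RPC while NSID 1 is still attached, so each attempt fails with "Requested NSID 1 already in use" and is reported from the paused callback (nvmf_rpc_ns_paused) before the subsystem resumes. Below is a minimal sketch of such a retry loop, not the test's actual code: it calls scripts/rpc.py directly instead of the rpc_cmd wrapper seen in this log, reuses the nqn.2016-06.io.spdk:cnode1 subsystem and malloc0 bdev named later in the output, and the rpc.py path and iteration count are placeholders.

    #!/usr/bin/env bash
    # Sketch only: the rpc.py path, bdev binding and loop count are assumptions.
    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # NSID 1 is still occupied, so every call below is expected to fail with
    # "Requested NSID 1 already in use"; '|| true' keeps the loop running.
    for _ in $(seq 1 50); do
        "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 || true
    done

The lines that follow show the other half of the sequence: the conflicting namespace is removed, a delay bdev (delay0) is stacked on top of malloc0, and it is re-added as NSID 1 before the abort example is pointed at the TCP target.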
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1583019) - No such process 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1583019 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:38.485 delay0 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.485 02:05:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:38.745 02:05:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.745 02:05:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:38.745 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.745 [2024-07-14 02:05:44.237488] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:45.340 Initializing NVMe Controllers 00:18:45.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:45.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:45.340 Initialization complete. Launching workers. 
00:18:45.340 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 82 00:18:45.340 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 369, failed to submit 33 00:18:45.340 success 158, unsuccess 211, failed 0 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.340 rmmod nvme_tcp 00:18:45.340 rmmod nvme_fabrics 00:18:45.340 rmmod nvme_keyring 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1581749 ']' 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1581749 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1581749 ']' 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1581749 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1581749 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1581749' 00:18:45.340 killing process with pid 1581749 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1581749 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1581749 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.340 02:05:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.243 02:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:47.243 00:18:47.244 real 0m27.742s 00:18:47.244 user 0m41.118s 00:18:47.244 sys 0m8.225s 00:18:47.244 02:05:52 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.244 02:05:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:47.244 ************************************ 00:18:47.244 END TEST nvmf_zcopy 00:18:47.244 ************************************ 00:18:47.244 02:05:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:47.244 02:05:52 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:47.244 02:05:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:47.244 02:05:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.244 02:05:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.244 ************************************ 00:18:47.244 START TEST nvmf_nmic 00:18:47.244 ************************************ 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:47.244 * Looking for test storage... 00:18:47.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:47.244 02:05:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.144 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.144 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:49.145 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:49.145 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:49.145 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:49.145 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.145 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:49.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:18:49.403 00:18:49.403 --- 10.0.0.2 ping statistics --- 00:18:49.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.403 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:18:49.403 00:18:49.403 --- 10.0.0.1 ping statistics --- 00:18:49.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.403 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1586330 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1586330 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1586330 ']' 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.403 02:05:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.403 [2024-07-14 02:05:55.041741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:49.403 [2024-07-14 02:05:55.041817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.403 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.661 [2024-07-14 02:05:55.113276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.661 [2024-07-14 02:05:55.208505] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.661 [2024-07-14 02:05:55.208569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:49.661 [2024-07-14 02:05:55.208586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.661 [2024-07-14 02:05:55.208600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.661 [2024-07-14 02:05:55.208612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.661 [2024-07-14 02:05:55.208744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.661 [2024-07-14 02:05:55.209050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.661 [2024-07-14 02:05:55.209075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.661 [2024-07-14 02:05:55.209077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.661 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.661 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:49.661 02:05:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.661 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.661 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.661 02:05:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.661 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.661 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.661 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 [2024-07-14 02:05:55.355535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 Malloc0 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 [2024-07-14 02:05:55.406467] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:49.920 test case1: single bdev can't be used in multiple subsystems 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 [2024-07-14 02:05:55.430331] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:49.920 [2024-07-14 02:05:55.430359] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:49.920 [2024-07-14 02:05:55.430389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.920 request: 00:18:49.920 { 00:18:49.920 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:49.920 "namespace": { 00:18:49.920 "bdev_name": "Malloc0", 00:18:49.920 "no_auto_visible": false 00:18:49.920 }, 00:18:49.920 "method": "nvmf_subsystem_add_ns", 00:18:49.920 "req_id": 1 00:18:49.920 } 00:18:49.920 Got JSON-RPC error response 00:18:49.920 response: 00:18:49.920 { 00:18:49.920 "code": -32602, 00:18:49.920 "message": "Invalid parameters" 00:18:49.920 } 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:49.920 Adding namespace failed - expected result. 
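A minimal sketch, assuming a running nvmf_tgt and the scripts/rpc.py helper invoked elsewhere in this log (paths shortened to the spdk checkout), of reproducing the test case1 conflict by hand; the NQNs, serials, and bdev name mirror the ones used above and are illustrative only:
  # claim Malloc0 in a first subsystem
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # adding the same bdev to a second subsystem should be rejected, mirroring the
  # "already claimed" / "Unable to add namespace" errors captured in the log above
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure'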
00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:49.920 test case2: host connect to nvmf target in multiple paths 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:49.920 [2024-07-14 02:05:55.438456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.920 02:05:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:50.488 02:05:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:51.055 02:05:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.055 02:05:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:51.055 02:05:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.055 02:05:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:51.055 02:05:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:53.585 02:05:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:53.585 02:05:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:53.585 02:05:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.585 02:05:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:53.585 02:05:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.585 02:05:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:53.585 02:05:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:53.585 [global] 00:18:53.585 thread=1 00:18:53.585 invalidate=1 00:18:53.585 rw=write 00:18:53.585 time_based=1 00:18:53.585 runtime=1 00:18:53.585 ioengine=libaio 00:18:53.585 direct=1 00:18:53.585 bs=4096 00:18:53.585 iodepth=1 00:18:53.585 norandommap=0 00:18:53.585 numjobs=1 00:18:53.585 00:18:53.585 verify_dump=1 00:18:53.585 verify_backlog=512 00:18:53.585 verify_state_save=0 00:18:53.585 do_verify=1 00:18:53.585 verify=crc32c-intel 00:18:53.585 [job0] 00:18:53.585 filename=/dev/nvme0n1 00:18:53.585 Could not set queue depth (nvme0n1) 00:18:53.585 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.585 fio-3.35 00:18:53.585 Starting 1 thread 00:18:54.520 00:18:54.520 job0: (groupid=0, jobs=1): err= 0: pid=1586964: Sun Jul 14 02:06:00 2024 00:18:54.520 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:54.520 slat (nsec): min=7686, max=54974, avg=17329.91, stdev=4387.35 
00:18:54.520 clat (usec): min=388, max=759, avg=509.58, stdev=45.30 00:18:54.520 lat (usec): min=397, max=792, avg=526.91, stdev=46.49 00:18:54.520 clat percentiles (usec): 00:18:54.520 | 1.00th=[ 416], 5.00th=[ 445], 10.00th=[ 461], 20.00th=[ 469], 00:18:54.520 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 494], 60.00th=[ 537], 00:18:54.520 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 562], 95.00th=[ 570], 00:18:54.520 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 693], 99.95th=[ 758], 00:18:54.520 | 99.99th=[ 758] 00:18:54.520 write: IOPS=1451, BW=5806KiB/s (5946kB/s)(5812KiB/1001msec); 0 zone resets 00:18:54.520 slat (usec): min=7, max=28891, avg=38.34, stdev=757.52 00:18:54.520 clat (usec): min=190, max=1206, avg=270.22, stdev=78.97 00:18:54.520 lat (usec): min=198, max=29393, avg=308.55, stdev=768.25 00:18:54.520 clat percentiles (usec): 00:18:54.520 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:18:54.520 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 243], 60.00th=[ 262], 00:18:54.520 | 70.00th=[ 285], 80.00th=[ 330], 90.00th=[ 396], 95.00th=[ 420], 00:18:54.520 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 1074], 99.95th=[ 1205], 00:18:54.520 | 99.99th=[ 1205] 00:18:54.520 bw ( KiB/s): min= 4904, max= 4904, per=84.46%, avg=4904.00, stdev= 0.00, samples=1 00:18:54.520 iops : min= 1226, max= 1226, avg=1226.00, stdev= 0.00, samples=1 00:18:54.520 lat (usec) : 250=33.02%, 500=46.79%, 750=19.98%, 1000=0.12% 00:18:54.520 lat (msec) : 2=0.08% 00:18:54.520 cpu : usr=3.40%, sys=5.60%, ctx=2480, majf=0, minf=2 00:18:54.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.520 issued rwts: total=1024,1453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.520 00:18:54.520 Run status group 0 (all jobs): 00:18:54.520 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:18:54.520 WRITE: bw=5806KiB/s (5946kB/s), 5806KiB/s-5806KiB/s (5946kB/s-5946kB/s), io=5812KiB (5951kB), run=1001-1001msec 00:18:54.520 00:18:54.520 Disk stats (read/write): 00:18:54.520 nvme0n1: ios=1050/1063, merge=0/0, ticks=1490/272, in_queue=1762, util=98.70% 00:18:54.520 02:06:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:54.779 rmmod nvme_tcp 00:18:54.779 rmmod nvme_fabrics 00:18:54.779 rmmod nvme_keyring 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1586330 ']' 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1586330 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1586330 ']' 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1586330 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1586330 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1586330' 00:18:54.779 killing process with pid 1586330 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1586330 00:18:54.779 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1586330 00:18:55.038 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.038 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:55.038 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:55.038 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.038 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:55.038 02:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.038 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.038 02:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.945 02:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.945 00:18:56.945 real 0m9.842s 00:18:56.945 user 0m22.047s 00:18:56.945 sys 0m2.408s 00:18:56.945 02:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:56.945 02:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.945 ************************************ 00:18:56.945 END TEST nvmf_nmic 00:18:56.945 ************************************ 00:18:56.945 02:06:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:56.945 02:06:02 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:56.945 02:06:02 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:56.945 02:06:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.945 02:06:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:56.945 ************************************ 00:18:56.945 START TEST nvmf_fio_target 00:18:56.945 ************************************ 00:18:56.945 02:06:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:57.204 * Looking for test storage... 00:18:57.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.204 02:06:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:57.205 02:06:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.112 02:06:04 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:59.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:59.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.112 02:06:04 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:59.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:59.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:59.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:18:59.112 00:18:59.112 --- 10.0.0.2 ping statistics --- 00:18:59.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.112 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:18:59.112 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:18:59.112 00:18:59.113 --- 10.0.0.1 ping statistics --- 00:18:59.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.113 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1589130 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1589130 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1589130 ']' 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.113 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.113 [2024-07-14 02:06:04.658227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:59.113 [2024-07-14 02:06:04.658314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.113 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.113 [2024-07-14 02:06:04.727253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.372 [2024-07-14 02:06:04.819700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.372 [2024-07-14 02:06:04.819761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.372 [2024-07-14 02:06:04.819775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.372 [2024-07-14 02:06:04.819786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.372 [2024-07-14 02:06:04.819796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.372 [2024-07-14 02:06:04.819844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.372 [2024-07-14 02:06:04.819918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.372 [2024-07-14 02:06:04.819923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.372 [2024-07-14 02:06:04.819886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.372 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.372 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:59.372 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.372 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:59.372 02:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.372 02:06:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.372 02:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:59.630 [2024-07-14 02:06:05.191318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.630 02:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.888 02:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:59.888 02:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.147 02:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:00.147 02:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.406 02:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
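For context, the target start-up and transport setup being traced here reduce to three commands: launch nvmf_tgt inside the namespace created above, create the TCP transport over RPC, and allocate malloc bdevs that later become namespaces. A hedged sketch of just those steps, with the Jenkins workspace path from the trace abbreviated to $SPDK for readability:

  # $SPDK stands for the spdk checkout path shown in the trace above
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target app on a 4-core mask
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                     # flags exactly as assembled by common.sh/fio.sh
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512                                   # 64 MiB malloc bdev, 512-byte blocks

The same bdev_malloc_create call is repeated in the trace below to build the members of the raid0 and concat volumes before the cnode1 subsystem and its 10.0.0.2:4420 listener are created.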
00:19:00.406 02:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.665 02:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:00.665 02:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:00.924 02:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.182 02:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:01.182 02:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.440 02:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:01.440 02:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.698 02:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:01.698 02:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:01.957 02:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:02.215 02:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:02.215 02:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.474 02:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:02.474 02:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:02.788 02:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:03.046 [2024-07-14 02:06:08.623593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.046 02:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:03.304 02:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:03.563 02:06:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:04.499 02:06:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:04.499 02:06:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:04.499 02:06:09 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.499 02:06:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:04.499 02:06:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:04.499 02:06:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:06.406 02:06:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:06.406 02:06:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:06.406 02:06:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:06.406 02:06:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:06.406 02:06:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.406 02:06:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:06.406 02:06:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:06.406 [global] 00:19:06.406 thread=1 00:19:06.406 invalidate=1 00:19:06.406 rw=write 00:19:06.406 time_based=1 00:19:06.406 runtime=1 00:19:06.406 ioengine=libaio 00:19:06.406 direct=1 00:19:06.406 bs=4096 00:19:06.406 iodepth=1 00:19:06.406 norandommap=0 00:19:06.406 numjobs=1 00:19:06.406 00:19:06.406 verify_dump=1 00:19:06.406 verify_backlog=512 00:19:06.406 verify_state_save=0 00:19:06.406 do_verify=1 00:19:06.406 verify=crc32c-intel 00:19:06.406 [job0] 00:19:06.406 filename=/dev/nvme0n1 00:19:06.406 [job1] 00:19:06.406 filename=/dev/nvme0n2 00:19:06.406 [job2] 00:19:06.406 filename=/dev/nvme0n3 00:19:06.406 [job3] 00:19:06.406 filename=/dev/nvme0n4 00:19:06.406 Could not set queue depth (nvme0n1) 00:19:06.406 Could not set queue depth (nvme0n2) 00:19:06.406 Could not set queue depth (nvme0n3) 00:19:06.406 Could not set queue depth (nvme0n4) 00:19:06.665 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.665 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.665 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.665 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.665 fio-3.35 00:19:06.665 Starting 4 threads 00:19:08.042 00:19:08.042 job0: (groupid=0, jobs=1): err= 0: pid=1590608: Sun Jul 14 02:06:13 2024 00:19:08.042 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:08.042 slat (nsec): min=5875, max=55078, avg=14981.53, stdev=6720.28 00:19:08.042 clat (usec): min=317, max=41974, avg=687.03, stdev=3125.41 00:19:08.042 lat (usec): min=325, max=41987, avg=702.01, stdev=3125.18 00:19:08.042 clat percentiles (usec): 00:19:08.042 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 371], 00:19:08.042 | 30.00th=[ 392], 40.00th=[ 424], 50.00th=[ 445], 60.00th=[ 461], 00:19:08.042 | 70.00th=[ 478], 80.00th=[ 510], 90.00th=[ 586], 95.00th=[ 603], 00:19:08.042 | 99.00th=[ 635], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:19:08.042 | 99.99th=[42206] 00:19:08.042 write: IOPS=1115, BW=4464KiB/s (4571kB/s)(4468KiB/1001msec); 0 zone resets 00:19:08.042 slat (nsec): min=6397, max=53506, avg=11360.54, stdev=5479.24 00:19:08.042 clat 
(usec): min=188, max=434, avg=233.02, stdev=33.55 00:19:08.042 lat (usec): min=197, max=469, avg=244.38, stdev=34.86 00:19:08.042 clat percentiles (usec): 00:19:08.042 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:19:08.042 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:19:08.042 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 281], 95.00th=[ 310], 00:19:08.042 | 99.00th=[ 363], 99.50th=[ 367], 99.90th=[ 396], 99.95th=[ 437], 00:19:08.042 | 99.99th=[ 437] 00:19:08.042 bw ( KiB/s): min= 4096, max= 4096, per=21.17%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.042 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.042 lat (usec) : 250=42.50%, 500=46.94%, 750=10.28% 00:19:08.042 lat (msec) : 50=0.28% 00:19:08.042 cpu : usr=1.60%, sys=4.10%, ctx=2141, majf=0, minf=2 00:19:08.042 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.042 issued rwts: total=1024,1117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.042 job1: (groupid=0, jobs=1): err= 0: pid=1590609: Sun Jul 14 02:06:13 2024 00:19:08.042 read: IOPS=514, BW=2057KiB/s (2106kB/s)(2092KiB/1017msec) 00:19:08.042 slat (nsec): min=5657, max=62911, avg=18670.23, stdev=9479.62 00:19:08.042 clat (usec): min=366, max=42093, avg=1244.86, stdev=5667.56 00:19:08.042 lat (usec): min=371, max=42124, avg=1263.53, stdev=5666.78 00:19:08.042 clat percentiles (usec): 00:19:08.042 | 1.00th=[ 375], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 424], 00:19:08.042 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 457], 60.00th=[ 469], 00:19:08.042 | 70.00th=[ 478], 80.00th=[ 486], 90.00th=[ 498], 95.00th=[ 510], 00:19:08.042 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:08.042 | 99.99th=[42206] 00:19:08.042 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:19:08.042 slat (nsec): min=6580, max=74971, avg=22443.86, stdev=10795.00 00:19:08.042 clat (usec): min=207, max=643, avg=316.01, stdev=52.27 00:19:08.042 lat (usec): min=228, max=666, avg=338.45, stdev=55.42 00:19:08.042 clat percentiles (usec): 00:19:08.042 | 1.00th=[ 225], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:19:08.042 | 30.00th=[ 277], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 326], 00:19:08.042 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 408], 00:19:08.042 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 553], 99.95th=[ 644], 00:19:08.042 | 99.99th=[ 644] 00:19:08.042 bw ( KiB/s): min= 2112, max= 6080, per=21.17%, avg=4096.00, stdev=2805.80, samples=2 00:19:08.042 iops : min= 528, max= 1520, avg=1024.00, stdev=701.45, samples=2 00:19:08.042 lat (usec) : 250=3.30%, 500=93.86%, 750=2.20% 00:19:08.042 lat (msec) : 50=0.65% 00:19:08.042 cpu : usr=1.77%, sys=3.44%, ctx=1548, majf=0, minf=1 00:19:08.042 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.042 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.042 job2: (groupid=0, jobs=1): err= 0: pid=1590610: Sun Jul 14 02:06:13 2024 00:19:08.042 read: IOPS=1022, 
BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:08.042 slat (nsec): min=6161, max=68083, avg=13531.93, stdev=8293.80 00:19:08.042 clat (usec): min=382, max=1337, avg=516.49, stdev=132.17 00:19:08.042 lat (usec): min=390, max=1351, avg=530.02, stdev=137.55 00:19:08.042 clat percentiles (usec): 00:19:08.042 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 404], 00:19:08.042 | 30.00th=[ 416], 40.00th=[ 449], 50.00th=[ 494], 60.00th=[ 519], 00:19:08.042 | 70.00th=[ 545], 80.00th=[ 586], 90.00th=[ 693], 95.00th=[ 832], 00:19:08.042 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 1012], 99.95th=[ 1336], 00:19:08.042 | 99.99th=[ 1336] 00:19:08.042 write: IOPS=1372, BW=5491KiB/s (5622kB/s)(5496KiB/1001msec); 0 zone resets 00:19:08.042 slat (nsec): min=8091, max=67655, avg=17359.28, stdev=9992.68 00:19:08.042 clat (usec): min=193, max=3309, avg=308.06, stdev=127.35 00:19:08.042 lat (usec): min=203, max=3349, avg=325.42, stdev=131.88 00:19:08.042 clat percentiles (usec): 00:19:08.042 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:19:08.042 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 269], 60.00th=[ 297], 00:19:08.042 | 70.00th=[ 363], 80.00th=[ 408], 90.00th=[ 457], 95.00th=[ 502], 00:19:08.042 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 644], 99.95th=[ 3294], 00:19:08.042 | 99.99th=[ 3294] 00:19:08.042 bw ( KiB/s): min= 4096, max= 4096, per=21.17%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.042 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.042 lat (usec) : 250=24.90%, 500=51.58%, 750=20.18%, 1000=3.21% 00:19:08.042 lat (msec) : 2=0.08%, 4=0.04% 00:19:08.042 cpu : usr=3.30%, sys=4.50%, ctx=2399, majf=0, minf=1 00:19:08.042 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.042 issued rwts: total=1024,1374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.042 job3: (groupid=0, jobs=1): err= 0: pid=1590611: Sun Jul 14 02:06:13 2024 00:19:08.042 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:08.042 slat (nsec): min=6070, max=44281, avg=12285.83, stdev=6691.84 00:19:08.042 clat (usec): min=327, max=1437, avg=445.69, stdev=100.65 00:19:08.042 lat (usec): min=336, max=1444, avg=457.97, stdev=103.70 00:19:08.042 clat percentiles (usec): 00:19:08.042 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 351], 00:19:08.042 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 453], 60.00th=[ 498], 00:19:08.042 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 570], 00:19:08.042 | 99.00th=[ 594], 99.50th=[ 644], 99.90th=[ 1287], 99.95th=[ 1434], 00:19:08.042 | 99.99th=[ 1434] 00:19:08.042 write: IOPS=1403, BW=5614KiB/s (5749kB/s)(5620KiB/1001msec); 0 zone resets 00:19:08.042 slat (nsec): min=8058, max=70640, avg=20027.75, stdev=11170.74 00:19:08.043 clat (usec): min=225, max=1293, avg=350.27, stdev=69.94 00:19:08.043 lat (usec): min=234, max=1317, avg=370.30, stdev=75.47 00:19:08.043 clat percentiles (usec): 00:19:08.043 | 1.00th=[ 237], 5.00th=[ 251], 10.00th=[ 262], 20.00th=[ 297], 00:19:08.043 | 30.00th=[ 310], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 359], 00:19:08.043 | 70.00th=[ 383], 80.00th=[ 412], 90.00th=[ 433], 95.00th=[ 449], 00:19:08.043 | 99.00th=[ 498], 99.50th=[ 515], 99.90th=[ 1074], 99.95th=[ 1287], 00:19:08.043 | 99.99th=[ 1287] 00:19:08.043 bw ( 
KiB/s): min= 4752, max= 4752, per=24.56%, avg=4752.00, stdev= 0.00, samples=1 00:19:08.043 iops : min= 1188, max= 1188, avg=1188.00, stdev= 0.00, samples=1 00:19:08.043 lat (usec) : 250=2.88%, 500=79.91%, 750=17.00% 00:19:08.043 lat (msec) : 2=0.21% 00:19:08.043 cpu : usr=2.80%, sys=5.50%, ctx=2429, majf=0, minf=1 00:19:08.043 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.043 issued rwts: total=1024,1405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.043 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.043 00:19:08.043 Run status group 0 (all jobs): 00:19:08.043 READ: bw=13.8MiB/s (14.5MB/s), 2057KiB/s-4092KiB/s (2106kB/s-4190kB/s), io=14.0MiB (14.7MB), run=1001-1017msec 00:19:08.043 WRITE: bw=18.9MiB/s (19.8MB/s), 4028KiB/s-5614KiB/s (4124kB/s-5749kB/s), io=19.2MiB (20.2MB), run=1001-1017msec 00:19:08.043 00:19:08.043 Disk stats (read/write): 00:19:08.043 nvme0n1: ios=781/1024, merge=0/0, ticks=627/229, in_queue=856, util=87.58% 00:19:08.043 nvme0n2: ios=543/1024, merge=0/0, ticks=532/307, in_queue=839, util=91.44% 00:19:08.043 nvme0n3: ios=912/1024, merge=0/0, ticks=1533/328, in_queue=1861, util=98.32% 00:19:08.043 nvme0n4: ios=988/1024, merge=0/0, ticks=705/321, in_queue=1026, util=92.39% 00:19:08.043 02:06:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:08.043 [global] 00:19:08.043 thread=1 00:19:08.043 invalidate=1 00:19:08.043 rw=randwrite 00:19:08.043 time_based=1 00:19:08.043 runtime=1 00:19:08.043 ioengine=libaio 00:19:08.043 direct=1 00:19:08.043 bs=4096 00:19:08.043 iodepth=1 00:19:08.043 norandommap=0 00:19:08.043 numjobs=1 00:19:08.043 00:19:08.043 verify_dump=1 00:19:08.043 verify_backlog=512 00:19:08.043 verify_state_save=0 00:19:08.043 do_verify=1 00:19:08.043 verify=crc32c-intel 00:19:08.043 [job0] 00:19:08.043 filename=/dev/nvme0n1 00:19:08.043 [job1] 00:19:08.043 filename=/dev/nvme0n2 00:19:08.043 [job2] 00:19:08.043 filename=/dev/nvme0n3 00:19:08.043 [job3] 00:19:08.043 filename=/dev/nvme0n4 00:19:08.043 Could not set queue depth (nvme0n1) 00:19:08.043 Could not set queue depth (nvme0n2) 00:19:08.043 Could not set queue depth (nvme0n3) 00:19:08.043 Could not set queue depth (nvme0n4) 00:19:08.043 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.043 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.043 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.043 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.043 fio-3.35 00:19:08.043 Starting 4 threads 00:19:09.422 00:19:09.422 job0: (groupid=0, jobs=1): err= 0: pid=1590919: Sun Jul 14 02:06:14 2024 00:19:09.423 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:19:09.423 slat (nsec): min=15812, max=34721, avg=29207.95, stdev=7992.94 00:19:09.423 clat (usec): min=40850, max=42028, avg=41340.44, stdev=514.57 00:19:09.423 lat (usec): min=40885, max=42062, avg=41369.65, stdev=514.79 00:19:09.423 clat percentiles (usec): 00:19:09.423 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:09.423 | 
30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:09.423 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:09.423 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:09.423 | 99.99th=[42206] 00:19:09.423 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:19:09.423 slat (nsec): min=6278, max=64758, avg=17750.22, stdev=9725.97 00:19:09.423 clat (usec): min=208, max=484, avg=291.45, stdev=55.20 00:19:09.423 lat (usec): min=228, max=494, avg=309.20, stdev=55.17 00:19:09.423 clat percentiles (usec): 00:19:09.423 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:19:09.423 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 281], 00:19:09.423 | 70.00th=[ 302], 80.00th=[ 330], 90.00th=[ 383], 95.00th=[ 408], 00:19:09.423 | 99.00th=[ 453], 99.50th=[ 465], 99.90th=[ 486], 99.95th=[ 486], 00:19:09.423 | 99.99th=[ 486] 00:19:09.423 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:19:09.423 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:09.423 lat (usec) : 250=17.07%, 500=78.99% 00:19:09.423 lat (msec) : 50=3.94% 00:19:09.423 cpu : usr=0.29%, sys=1.07%, ctx=534, majf=0, minf=1 00:19:09.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.423 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.423 job1: (groupid=0, jobs=1): err= 0: pid=1590938: Sun Jul 14 02:06:14 2024 00:19:09.423 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:19:09.423 slat (nsec): min=15247, max=33986, avg=28666.52, stdev=8109.92 00:19:09.423 clat (usec): min=40900, max=42029, avg=41533.40, stdev=506.92 00:19:09.423 lat (usec): min=40916, max=42048, avg=41562.07, stdev=509.36 00:19:09.423 clat percentiles (usec): 00:19:09.423 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:09.423 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:19:09.423 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:09.423 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:09.423 | 99.99th=[42206] 00:19:09.423 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:19:09.423 slat (nsec): min=6984, max=51822, avg=19826.38, stdev=10377.36 00:19:09.423 clat (usec): min=210, max=453, avg=281.28, stdev=46.13 00:19:09.423 lat (usec): min=219, max=462, avg=301.11, stdev=47.53 00:19:09.423 clat percentiles (usec): 00:19:09.423 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:19:09.423 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:19:09.423 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 359], 95.00th=[ 392], 00:19:09.423 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 453], 99.95th=[ 453], 00:19:09.423 | 99.99th=[ 453] 00:19:09.423 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:19:09.423 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:09.423 lat (usec) : 250=25.33%, 500=70.73% 00:19:09.423 lat (msec) : 50=3.94% 00:19:09.423 cpu : usr=0.39%, sys=1.26%, ctx=534, majf=0, minf=2 00:19:09.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.423 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.423 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.423 job2: (groupid=0, jobs=1): err= 0: pid=1590959: Sun Jul 14 02:06:14 2024 00:19:09.423 read: IOPS=211, BW=847KiB/s (867kB/s)(848KiB/1001msec) 00:19:09.423 slat (nsec): min=7320, max=34698, avg=10299.48, stdev=6959.80 00:19:09.423 clat (usec): min=389, max=42638, avg=3937.47, stdev=11386.41 00:19:09.423 lat (usec): min=397, max=42671, avg=3947.77, stdev=11392.77 00:19:09.423 clat percentiles (usec): 00:19:09.423 | 1.00th=[ 396], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 433], 00:19:09.423 | 30.00th=[ 437], 40.00th=[ 441], 50.00th=[ 441], 60.00th=[ 445], 00:19:09.423 | 70.00th=[ 445], 80.00th=[ 449], 90.00th=[ 627], 95.00th=[41157], 00:19:09.423 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:09.423 | 99.99th=[42730] 00:19:09.423 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:09.423 slat (nsec): min=7587, max=57875, avg=18623.74, stdev=10169.29 00:19:09.423 clat (usec): min=219, max=507, avg=294.63, stdev=51.59 00:19:09.423 lat (usec): min=227, max=544, avg=313.25, stdev=54.92 00:19:09.423 clat percentiles (usec): 00:19:09.423 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 255], 00:19:09.423 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:19:09.423 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 408], 00:19:09.423 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 506], 99.95th=[ 506], 00:19:09.423 | 99.99th=[ 506] 00:19:09.423 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:19:09.423 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:09.423 lat (usec) : 250=10.64%, 500=85.64%, 750=0.97%, 1000=0.14% 00:19:09.423 lat (msec) : 10=0.14%, 50=2.49% 00:19:09.423 cpu : usr=1.30%, sys=1.00%, ctx=725, majf=0, minf=1 00:19:09.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.423 issued rwts: total=212,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.423 job3: (groupid=0, jobs=1): err= 0: pid=1590961: Sun Jul 14 02:06:14 2024 00:19:09.423 read: IOPS=24, BW=96.9KiB/s (99.2kB/s)(100KiB/1032msec) 00:19:09.423 slat (nsec): min=15369, max=37605, avg=29217.16, stdev=7408.90 00:19:09.423 clat (usec): min=474, max=41462, avg=34511.49, stdev=15134.05 00:19:09.423 lat (usec): min=501, max=41495, avg=34540.71, stdev=15134.30 00:19:09.423 clat percentiles (usec): 00:19:09.423 | 1.00th=[ 474], 5.00th=[ 482], 10.00th=[ 578], 20.00th=[41157], 00:19:09.423 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:09.423 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:09.423 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:09.423 | 99.99th=[41681] 00:19:09.423 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:19:09.423 slat (nsec): min=6154, max=57551, avg=20646.07, stdev=10413.14 00:19:09.423 clat (usec): min=220, max=627, avg=302.09, stdev=63.67 00:19:09.423 lat (usec): min=229, 
max=660, avg=322.74, stdev=64.66 00:19:09.423 clat percentiles (usec): 00:19:09.423 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 245], 20.00th=[ 253], 00:19:09.423 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:19:09.423 | 70.00th=[ 314], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 424], 00:19:09.423 | 99.00th=[ 482], 99.50th=[ 545], 99.90th=[ 627], 99.95th=[ 627], 00:19:09.423 | 99.99th=[ 627] 00:19:09.423 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:19:09.423 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:09.423 lat (usec) : 250=15.08%, 500=79.70%, 750=1.30% 00:19:09.423 lat (msec) : 50=3.91% 00:19:09.423 cpu : usr=0.58%, sys=1.26%, ctx=537, majf=0, minf=1 00:19:09.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.423 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.423 00:19:09.423 Run status group 0 (all jobs): 00:19:09.423 READ: bw=1081KiB/s (1107kB/s), 81.6KiB/s-847KiB/s (83.5kB/s-867kB/s), io=1116KiB (1143kB), run=1001-1032msec 00:19:09.423 WRITE: bw=7938KiB/s (8128kB/s), 1984KiB/s-2046KiB/s (2032kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1032msec 00:19:09.423 00:19:09.423 Disk stats (read/write): 00:19:09.423 nvme0n1: ios=47/512, merge=0/0, ticks=1707/144, in_queue=1851, util=97.49% 00:19:09.423 nvme0n2: ios=45/512, merge=0/0, ticks=1013/140, in_queue=1153, util=99.29% 00:19:09.423 nvme0n3: ios=40/512, merge=0/0, ticks=1604/145, in_queue=1749, util=97.07% 00:19:09.423 nvme0n4: ios=20/512, merge=0/0, ticks=659/146, in_queue=805, util=89.66% 00:19:09.423 02:06:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:09.423 [global] 00:19:09.423 thread=1 00:19:09.424 invalidate=1 00:19:09.424 rw=write 00:19:09.424 time_based=1 00:19:09.424 runtime=1 00:19:09.424 ioengine=libaio 00:19:09.424 direct=1 00:19:09.424 bs=4096 00:19:09.424 iodepth=128 00:19:09.424 norandommap=0 00:19:09.424 numjobs=1 00:19:09.424 00:19:09.424 verify_dump=1 00:19:09.424 verify_backlog=512 00:19:09.424 verify_state_save=0 00:19:09.424 do_verify=1 00:19:09.424 verify=crc32c-intel 00:19:09.424 [job0] 00:19:09.424 filename=/dev/nvme0n1 00:19:09.424 [job1] 00:19:09.424 filename=/dev/nvme0n2 00:19:09.424 [job2] 00:19:09.424 filename=/dev/nvme0n3 00:19:09.424 [job3] 00:19:09.424 filename=/dev/nvme0n4 00:19:09.424 Could not set queue depth (nvme0n1) 00:19:09.424 Could not set queue depth (nvme0n2) 00:19:09.424 Could not set queue depth (nvme0n3) 00:19:09.424 Could not set queue depth (nvme0n4) 00:19:09.424 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.424 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.424 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.424 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.424 fio-3.35 00:19:09.424 Starting 4 threads 00:19:10.803 00:19:10.803 job0: (groupid=0, jobs=1): err= 0: pid=1591192: Sun Jul 14 02:06:16 2024 00:19:10.803 read: 
IOPS=2120, BW=8481KiB/s (8685kB/s)(8532KiB/1006msec) 00:19:10.803 slat (usec): min=3, max=25943, avg=142.20, stdev=998.56 00:19:10.803 clat (usec): min=1381, max=53055, avg=17198.58, stdev=8317.81 00:19:10.803 lat (usec): min=7468, max=53064, avg=17340.78, stdev=8376.58 00:19:10.803 clat percentiles (usec): 00:19:10.803 | 1.00th=[11207], 5.00th=[12387], 10.00th=[12780], 20.00th=[13566], 00:19:10.803 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:19:10.803 | 70.00th=[15533], 80.00th=[17171], 90.00th=[23462], 95.00th=[45351], 00:19:10.803 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:19:10.803 | 99.99th=[53216] 00:19:10.803 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:19:10.803 slat (usec): min=3, max=42059, avg=263.33, stdev=2164.56 00:19:10.803 clat (msec): min=10, max=231, avg=28.02, stdev=19.63 00:19:10.803 lat (msec): min=12, max=231, avg=28.29, stdev=20.05 00:19:10.803 clat percentiles (msec): 00:19:10.803 | 1.00th=[ 15], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 19], 00:19:10.803 | 30.00th=[ 20], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 24], 00:19:10.803 | 70.00th=[ 27], 80.00th=[ 32], 90.00th=[ 42], 95.00th=[ 60], 00:19:10.803 | 99.00th=[ 125], 99.50th=[ 167], 99.90th=[ 232], 99.95th=[ 232], 00:19:10.803 | 99.99th=[ 232] 00:19:10.803 bw ( KiB/s): min= 8175, max=11944, per=16.67%, avg=10059.50, stdev=2665.09, samples=2 00:19:10.803 iops : min= 2043, max= 2986, avg=2514.50, stdev=666.80, samples=2 00:19:10.803 lat (msec) : 2=0.02%, 10=0.40%, 20=57.17%, 50=37.80%, 100=3.92% 00:19:10.803 lat (msec) : 250=0.68% 00:19:10.803 cpu : usr=4.18%, sys=4.68%, ctx=209, majf=0, minf=1 00:19:10.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:10.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.803 issued rwts: total=2133,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.803 job1: (groupid=0, jobs=1): err= 0: pid=1591193: Sun Jul 14 02:06:16 2024 00:19:10.803 read: IOPS=4064, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1006msec) 00:19:10.803 slat (usec): min=2, max=10225, avg=99.13, stdev=636.27 00:19:10.803 clat (usec): min=2603, max=30582, avg=13013.95, stdev=3356.47 00:19:10.803 lat (usec): min=4201, max=30601, avg=13113.08, stdev=3399.72 00:19:10.803 clat percentiles (usec): 00:19:10.803 | 1.00th=[ 5800], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[10552], 00:19:10.803 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12518], 60.00th=[13435], 00:19:10.803 | 70.00th=[14222], 80.00th=[14746], 90.00th=[17433], 95.00th=[20579], 00:19:10.803 | 99.00th=[23200], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:19:10.803 | 99.99th=[30540] 00:19:10.803 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:19:10.803 slat (usec): min=3, max=26637, avg=130.16, stdev=1016.45 00:19:10.803 clat (usec): min=870, max=94691, avg=17030.01, stdev=13805.19 00:19:10.803 lat (usec): min=878, max=94716, avg=17160.16, stdev=13907.64 00:19:10.803 clat percentiles (usec): 00:19:10.803 | 1.00th=[ 2442], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[10159], 00:19:10.803 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12649], 60.00th=[14091], 00:19:10.803 | 70.00th=[16450], 80.00th=[21365], 90.00th=[31851], 95.00th=[39060], 00:19:10.803 | 99.00th=[89654], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:19:10.803 
| 99.99th=[94897] 00:19:10.803 bw ( KiB/s): min=12263, max=20480, per=27.13%, avg=16371.50, stdev=5810.30, samples=2 00:19:10.803 iops : min= 3065, max= 5120, avg=4092.50, stdev=1453.10, samples=2 00:19:10.803 lat (usec) : 1000=0.04% 00:19:10.803 lat (msec) : 2=0.33%, 4=0.43%, 10=13.78%, 20=70.79%, 50=13.08% 00:19:10.803 lat (msec) : 100=1.55% 00:19:10.803 cpu : usr=4.78%, sys=5.27%, ctx=380, majf=0, minf=1 00:19:10.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:10.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.803 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.803 job2: (groupid=0, jobs=1): err= 0: pid=1591194: Sun Jul 14 02:06:16 2024 00:19:10.803 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:19:10.803 slat (usec): min=2, max=18462, avg=105.64, stdev=786.59 00:19:10.803 clat (usec): min=4676, max=39701, avg=13940.64, stdev=4090.48 00:19:10.803 lat (usec): min=5339, max=39755, avg=14046.29, stdev=4136.74 00:19:10.803 clat percentiles (usec): 00:19:10.803 | 1.00th=[ 7832], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10683], 00:19:10.803 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12518], 60.00th=[14091], 00:19:10.803 | 70.00th=[15926], 80.00th=[17171], 90.00th=[20841], 95.00th=[21365], 00:19:10.803 | 99.00th=[23987], 99.50th=[25560], 99.90th=[26084], 99.95th=[28705], 00:19:10.803 | 99.99th=[39584] 00:19:10.803 write: IOPS=4955, BW=19.4MiB/s (20.3MB/s)(19.6MiB/1011msec); 0 zone resets 00:19:10.803 slat (usec): min=4, max=18972, avg=91.31, stdev=535.90 00:19:10.803 clat (usec): min=3644, max=34914, avg=12737.25, stdev=5304.24 00:19:10.803 lat (usec): min=3662, max=34938, avg=12828.56, stdev=5334.82 00:19:10.803 clat percentiles (usec): 00:19:10.803 | 1.00th=[ 4359], 5.00th=[ 5800], 10.00th=[ 6980], 20.00th=[ 9634], 00:19:10.803 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:19:10.803 | 70.00th=[13173], 80.00th=[13960], 90.00th=[20579], 95.00th=[23200], 00:19:10.803 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:19:10.803 | 99.99th=[34866] 00:19:10.803 bw ( KiB/s): min=18584, max=20480, per=32.37%, avg=19532.00, stdev=1340.67, samples=2 00:19:10.803 iops : min= 4646, max= 5120, avg=4883.00, stdev=335.17, samples=2 00:19:10.804 lat (msec) : 4=0.25%, 10=17.30%, 20=71.65%, 50=10.80% 00:19:10.804 cpu : usr=7.52%, sys=10.79%, ctx=578, majf=0, minf=1 00:19:10.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:10.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.804 issued rwts: total=4608,5010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.804 job3: (groupid=0, jobs=1): err= 0: pid=1591195: Sun Jul 14 02:06:16 2024 00:19:10.804 read: IOPS=3073, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1006msec) 00:19:10.804 slat (usec): min=2, max=21508, avg=146.33, stdev=1173.49 00:19:10.804 clat (usec): min=3819, max=56628, avg=18975.24, stdev=8074.63 00:19:10.804 lat (usec): min=3831, max=56664, avg=19121.56, stdev=8154.16 00:19:10.804 clat percentiles (usec): 00:19:10.804 | 1.00th=[ 5014], 5.00th=[10945], 10.00th=[11863], 20.00th=[12649], 00:19:10.804 | 30.00th=[13960], 
40.00th=[14615], 50.00th=[15664], 60.00th=[17695], 00:19:10.804 | 70.00th=[21365], 80.00th=[25035], 90.00th=[31851], 95.00th=[36439], 00:19:10.804 | 99.00th=[38536], 99.50th=[38536], 99.90th=[41681], 99.95th=[55837], 00:19:10.804 | 99.99th=[56886] 00:19:10.804 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:19:10.804 slat (usec): min=3, max=16932, avg=128.73, stdev=834.95 00:19:10.804 clat (usec): min=906, max=41450, avg=19265.03, stdev=8655.40 00:19:10.804 lat (usec): min=928, max=41461, avg=19393.77, stdev=8728.36 00:19:10.804 clat percentiles (usec): 00:19:10.804 | 1.00th=[ 4817], 5.00th=[ 7701], 10.00th=[ 9765], 20.00th=[11338], 00:19:10.804 | 30.00th=[13304], 40.00th=[15008], 50.00th=[18482], 60.00th=[20841], 00:19:10.804 | 70.00th=[22938], 80.00th=[26870], 90.00th=[32375], 95.00th=[36439], 00:19:10.804 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:19:10.804 | 99.99th=[41681] 00:19:10.804 bw ( KiB/s): min=13240, max=14576, per=23.05%, avg=13908.00, stdev=944.69, samples=2 00:19:10.804 iops : min= 3310, max= 3644, avg=3477.00, stdev=236.17, samples=2 00:19:10.804 lat (usec) : 1000=0.03% 00:19:10.804 lat (msec) : 4=0.28%, 10=7.29%, 20=52.91%, 50=39.44%, 100=0.04% 00:19:10.804 cpu : usr=3.18%, sys=4.58%, ctx=345, majf=0, minf=1 00:19:10.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:10.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.804 issued rwts: total=3092,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.804 00:19:10.804 Run status group 0 (all jobs): 00:19:10.804 READ: bw=53.8MiB/s (56.4MB/s), 8481KiB/s-17.8MiB/s (8685kB/s-18.7MB/s), io=54.4MiB (57.0MB), run=1006-1011msec 00:19:10.804 WRITE: bw=58.9MiB/s (61.8MB/s), 9.94MiB/s-19.4MiB/s (10.4MB/s-20.3MB/s), io=59.6MiB (62.5MB), run=1006-1011msec 00:19:10.804 00:19:10.804 Disk stats (read/write): 00:19:10.804 nvme0n1: ios=2089/2079, merge=0/0, ticks=11750/13608, in_queue=25358, util=98.50% 00:19:10.804 nvme0n2: ios=3119/3359, merge=0/0, ticks=24207/29128, in_queue=53335, util=99.80% 00:19:10.804 nvme0n3: ios=4012/4096, merge=0/0, ticks=53653/49330, in_queue=102983, util=98.12% 00:19:10.804 nvme0n4: ios=2560/2953, merge=0/0, ticks=32709/35127, in_queue=67836, util=89.70% 00:19:10.804 02:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:10.804 [global] 00:19:10.804 thread=1 00:19:10.804 invalidate=1 00:19:10.804 rw=randwrite 00:19:10.804 time_based=1 00:19:10.804 runtime=1 00:19:10.804 ioengine=libaio 00:19:10.804 direct=1 00:19:10.804 bs=4096 00:19:10.804 iodepth=128 00:19:10.804 norandommap=0 00:19:10.804 numjobs=1 00:19:10.804 00:19:10.804 verify_dump=1 00:19:10.804 verify_backlog=512 00:19:10.804 verify_state_save=0 00:19:10.804 do_verify=1 00:19:10.804 verify=crc32c-intel 00:19:10.804 [job0] 00:19:10.804 filename=/dev/nvme0n1 00:19:10.804 [job1] 00:19:10.804 filename=/dev/nvme0n2 00:19:10.804 [job2] 00:19:10.804 filename=/dev/nvme0n3 00:19:10.804 [job3] 00:19:10.804 filename=/dev/nvme0n4 00:19:10.804 Could not set queue depth (nvme0n1) 00:19:10.804 Could not set queue depth (nvme0n2) 00:19:10.804 Could not set queue depth (nvme0n3) 00:19:10.804 Could not set queue depth (nvme0n4) 00:19:10.804 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.804 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.804 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.804 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.804 fio-3.35 00:19:10.804 Starting 4 threads 00:19:12.182 00:19:12.182 job0: (groupid=0, jobs=1): err= 0: pid=1591421: Sun Jul 14 02:06:17 2024 00:19:12.182 read: IOPS=2719, BW=10.6MiB/s (11.1MB/s)(11.2MiB/1055msec) 00:19:12.182 slat (usec): min=3, max=22565, avg=135.61, stdev=970.91 00:19:12.182 clat (usec): min=8770, max=69968, avg=19325.77, stdev=11222.63 00:19:12.182 lat (usec): min=8790, max=69978, avg=19461.38, stdev=11256.40 00:19:12.182 clat percentiles (usec): 00:19:12.182 | 1.00th=[11731], 5.00th=[12780], 10.00th=[13042], 20.00th=[13173], 00:19:12.182 | 30.00th=[13435], 40.00th=[14353], 50.00th=[14877], 60.00th=[15533], 00:19:12.182 | 70.00th=[18744], 80.00th=[23200], 90.00th=[28967], 95.00th=[37487], 00:19:12.182 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:19:12.182 | 99.99th=[69731] 00:19:12.182 write: IOPS=2911, BW=11.4MiB/s (11.9MB/s)(12.0MiB/1055msec); 0 zone resets 00:19:12.182 slat (usec): min=4, max=18560, avg=188.64, stdev=1060.42 00:19:12.182 clat (usec): min=6618, max=95870, avg=25469.19, stdev=17948.35 00:19:12.182 lat (usec): min=6655, max=95881, avg=25657.83, stdev=18064.23 00:19:12.182 clat percentiles (usec): 00:19:12.182 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[11731], 20.00th=[13173], 00:19:12.182 | 30.00th=[13829], 40.00th=[17433], 50.00th=[21103], 60.00th=[23200], 00:19:12.182 | 70.00th=[23725], 80.00th=[31327], 90.00th=[53216], 95.00th=[59507], 00:19:12.182 | 99.00th=[90702], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:19:12.182 | 99.99th=[95945] 00:19:12.182 bw ( KiB/s): min=10832, max=13744, per=24.10%, avg=12288.00, stdev=2059.09, samples=2 00:19:12.182 iops : min= 2708, max= 3436, avg=3072.00, stdev=514.77, samples=2 00:19:12.182 lat (msec) : 10=2.12%, 20=58.63%, 50=31.17%, 100=8.08% 00:19:12.182 cpu : usr=5.03%, sys=5.69%, ctx=294, majf=0, minf=1 00:19:12.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:12.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.182 issued rwts: total=2869,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.182 job1: (groupid=0, jobs=1): err= 0: pid=1591422: Sun Jul 14 02:06:17 2024 00:19:12.182 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:19:12.182 slat (usec): min=2, max=13249, avg=111.04, stdev=742.16 00:19:12.182 clat (usec): min=5766, max=42935, avg=13893.04, stdev=5437.17 00:19:12.182 lat (usec): min=5772, max=42948, avg=14004.08, stdev=5493.14 00:19:12.182 clat percentiles (usec): 00:19:12.182 | 1.00th=[ 7046], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9634], 00:19:12.182 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12780], 60.00th=[13566], 00:19:12.182 | 70.00th=[14484], 80.00th=[17171], 90.00th=[18744], 95.00th=[24249], 00:19:12.182 | 99.00th=[38536], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:19:12.182 | 99.99th=[42730] 00:19:12.182 write: IOPS=3926, BW=15.3MiB/s 
(16.1MB/s)(15.5MiB/1012msec); 0 zone resets 00:19:12.182 slat (usec): min=3, max=21035, avg=136.94, stdev=719.78 00:19:12.182 clat (usec): min=1386, max=49842, avg=19763.95, stdev=11342.78 00:19:12.182 lat (usec): min=1401, max=49861, avg=19900.89, stdev=11414.23 00:19:12.182 clat percentiles (usec): 00:19:12.182 | 1.00th=[ 2671], 5.00th=[ 5604], 10.00th=[ 6980], 20.00th=[ 9110], 00:19:12.182 | 30.00th=[12518], 40.00th=[14484], 50.00th=[18744], 60.00th=[20317], 00:19:12.182 | 70.00th=[22152], 80.00th=[30802], 90.00th=[38536], 95.00th=[43254], 00:19:12.182 | 99.00th=[48497], 99.50th=[48497], 99.90th=[49546], 99.95th=[50070], 00:19:12.182 | 99.99th=[50070] 00:19:12.182 bw ( KiB/s): min=13312, max=17456, per=30.17%, avg=15384.00, stdev=2930.25, samples=2 00:19:12.182 iops : min= 3328, max= 4364, avg=3846.00, stdev=732.56, samples=2 00:19:12.182 lat (msec) : 2=0.13%, 4=0.61%, 10=20.35%, 20=53.89%, 50=25.02% 00:19:12.182 cpu : usr=5.24%, sys=8.31%, ctx=378, majf=0, minf=1 00:19:12.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:12.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.182 issued rwts: total=3584,3974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.182 job2: (groupid=0, jobs=1): err= 0: pid=1591423: Sun Jul 14 02:06:17 2024 00:19:12.182 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:19:12.182 slat (usec): min=3, max=28403, avg=135.91, stdev=1126.51 00:19:12.182 clat (usec): min=2785, max=61276, avg=17007.38, stdev=9073.24 00:19:12.182 lat (usec): min=2791, max=67257, avg=17143.30, stdev=9194.67 00:19:12.182 clat percentiles (usec): 00:19:12.182 | 1.00th=[ 4555], 5.00th=[ 6652], 10.00th=[10290], 20.00th=[11207], 00:19:12.182 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[14615], 00:19:12.182 | 70.00th=[19792], 80.00th=[25035], 90.00th=[28967], 95.00th=[40109], 00:19:12.182 | 99.00th=[42730], 99.50th=[42730], 99.90th=[52691], 99.95th=[54264], 00:19:12.182 | 99.99th=[61080] 00:19:12.182 write: IOPS=2948, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1011msec); 0 zone resets 00:19:12.182 slat (usec): min=3, max=37789, avg=196.22, stdev=1293.88 00:19:12.182 clat (usec): min=579, max=140309, avg=28396.19, stdev=26819.70 00:19:12.182 lat (usec): min=612, max=140320, avg=28592.41, stdev=26971.12 00:19:12.182 clat percentiles (usec): 00:19:12.182 | 1.00th=[ 1434], 5.00th=[ 3097], 10.00th=[ 4686], 20.00th=[ 10028], 00:19:12.182 | 30.00th=[ 12649], 40.00th=[ 18744], 50.00th=[ 22152], 60.00th=[ 23462], 00:19:12.182 | 70.00th=[ 29492], 80.00th=[ 40109], 90.00th=[ 65799], 95.00th=[ 78119], 00:19:12.182 | 99.00th=[132645], 99.50th=[137364], 99.90th=[139461], 99.95th=[139461], 00:19:12.182 | 99.99th=[139461] 00:19:12.182 bw ( KiB/s): min=11248, max=11584, per=22.39%, avg=11416.00, stdev=237.59, samples=2 00:19:12.182 iops : min= 2812, max= 2896, avg=2854.00, stdev=59.40, samples=2 00:19:12.182 lat (usec) : 750=0.02%, 1000=0.05% 00:19:12.182 lat (msec) : 2=1.01%, 4=3.59%, 10=10.05%, 20=39.96%, 50=37.66% 00:19:12.182 lat (msec) : 100=5.50%, 250=2.15% 00:19:12.182 cpu : usr=4.55%, sys=5.94%, ctx=298, majf=0, minf=1 00:19:12.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:12.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:19:12.183 issued rwts: total=2560,2981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.183 job3: (groupid=0, jobs=1): err= 0: pid=1591424: Sun Jul 14 02:06:17 2024 00:19:12.183 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:19:12.183 slat (usec): min=2, max=13298, avg=162.38, stdev=922.42 00:19:12.183 clat (usec): min=9252, max=66976, avg=18835.69, stdev=9109.23 00:19:12.183 lat (usec): min=9255, max=66989, avg=18998.07, stdev=9220.75 00:19:12.183 clat percentiles (usec): 00:19:12.183 | 1.00th=[10421], 5.00th=[12256], 10.00th=[12780], 20.00th=[13304], 00:19:12.183 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14484], 60.00th=[15664], 00:19:12.183 | 70.00th=[19268], 80.00th=[24773], 90.00th=[30802], 95.00th=[39060], 00:19:12.183 | 99.00th=[55837], 99.50th=[59507], 99.90th=[66847], 99.95th=[66847], 00:19:12.183 | 99.99th=[66847] 00:19:12.183 write: IOPS=3379, BW=13.2MiB/s (13.8MB/s)(13.4MiB/1013msec); 0 zone resets 00:19:12.183 slat (usec): min=3, max=17311, avg=141.02, stdev=829.57 00:19:12.183 clat (usec): min=6807, max=66976, avg=20588.19, stdev=11260.11 00:19:12.183 lat (usec): min=6816, max=69690, avg=20729.21, stdev=11323.17 00:19:12.183 clat percentiles (usec): 00:19:12.183 | 1.00th=[ 8029], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11994], 00:19:12.183 | 30.00th=[12387], 40.00th=[13304], 50.00th=[18744], 60.00th=[20055], 00:19:12.183 | 70.00th=[22152], 80.00th=[27132], 90.00th=[35390], 95.00th=[44827], 00:19:12.183 | 99.00th=[60031], 99.50th=[61080], 99.90th=[63177], 99.95th=[63177], 00:19:12.183 | 99.99th=[66847] 00:19:12.183 bw ( KiB/s): min=12312, max=14072, per=25.87%, avg=13192.00, stdev=1244.51, samples=2 00:19:12.183 iops : min= 3078, max= 3518, avg=3298.00, stdev=311.13, samples=2 00:19:12.183 lat (msec) : 10=3.46%, 20=63.17%, 50=30.89%, 100=2.48% 00:19:12.183 cpu : usr=2.27%, sys=3.75%, ctx=381, majf=0, minf=1 00:19:12.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:12.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.183 issued rwts: total=3072,3423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.183 00:19:12.183 Run status group 0 (all jobs): 00:19:12.183 READ: bw=44.7MiB/s (46.9MB/s), 9.89MiB/s-13.8MiB/s (10.4MB/s-14.5MB/s), io=47.2MiB (49.5MB), run=1011-1055msec 00:19:12.183 WRITE: bw=49.8MiB/s (52.2MB/s), 11.4MiB/s-15.3MiB/s (11.9MB/s-16.1MB/s), io=52.5MiB (55.1MB), run=1011-1055msec 00:19:12.183 00:19:12.183 Disk stats (read/write): 00:19:12.183 nvme0n1: ios=2599/2607, merge=0/0, ticks=42949/61854, in_queue=104803, util=99.10% 00:19:12.183 nvme0n2: ios=3112/3342, merge=0/0, ticks=40815/61390, in_queue=102205, util=98.88% 00:19:12.183 nvme0n3: ios=2073/2383, merge=0/0, ticks=35843/67354, in_queue=103197, util=98.02% 00:19:12.183 nvme0n4: ios=2560/2860, merge=0/0, ticks=26088/26990, in_queue=53078, util=89.70% 00:19:12.183 02:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:12.183 02:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1591562 00:19:12.183 02:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:12.183 02:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:12.183 [global] 
00:19:12.183 thread=1 00:19:12.183 invalidate=1 00:19:12.183 rw=read 00:19:12.183 time_based=1 00:19:12.183 runtime=10 00:19:12.183 ioengine=libaio 00:19:12.183 direct=1 00:19:12.183 bs=4096 00:19:12.183 iodepth=1 00:19:12.183 norandommap=1 00:19:12.183 numjobs=1 00:19:12.183 00:19:12.183 [job0] 00:19:12.183 filename=/dev/nvme0n1 00:19:12.183 [job1] 00:19:12.183 filename=/dev/nvme0n2 00:19:12.183 [job2] 00:19:12.183 filename=/dev/nvme0n3 00:19:12.183 [job3] 00:19:12.183 filename=/dev/nvme0n4 00:19:12.183 Could not set queue depth (nvme0n1) 00:19:12.183 Could not set queue depth (nvme0n2) 00:19:12.183 Could not set queue depth (nvme0n3) 00:19:12.183 Could not set queue depth (nvme0n4) 00:19:12.440 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.440 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.440 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.440 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.440 fio-3.35 00:19:12.440 Starting 4 threads 00:19:15.726 02:06:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:15.726 02:06:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:15.726 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=12656640, buflen=4096 00:19:15.726 fio: pid=1591657, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:15.726 02:06:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.726 02:06:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:15.726 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=27967488, buflen=4096 00:19:15.726 fio: pid=1591656, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:15.983 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4972544, buflen=4096 00:19:15.983 fio: pid=1591654, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:15.983 02:06:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.983 02:06:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:16.241 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=29745152, buflen=4096 00:19:16.241 fio: pid=1591655, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:16.241 02:06:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.241 02:06:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:16.241 00:19:16.241 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1591654: Sun Jul 14 02:06:21 2024 00:19:16.241 read: IOPS=352, BW=1407KiB/s (1441kB/s)(4856KiB/3451msec) 00:19:16.241 slat (usec): min=5, max=26885, avg=49.37, 
stdev=894.19 00:19:16.241 clat (usec): min=344, max=42509, avg=2771.97, stdev=9420.28 00:19:16.241 lat (usec): min=351, max=69011, avg=2821.37, stdev=9552.64 00:19:16.241 clat percentiles (usec): 00:19:16.241 | 1.00th=[ 392], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 416], 00:19:16.241 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 453], 00:19:16.241 | 70.00th=[ 465], 80.00th=[ 478], 90.00th=[ 766], 95.00th=[41157], 00:19:16.241 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:16.241 | 99.99th=[42730] 00:19:16.241 bw ( KiB/s): min= 96, max= 5456, per=5.55%, avg=1093.33, stdev=2150.44, samples=6 00:19:16.241 iops : min= 24, max= 1364, avg=273.33, stdev=537.61, samples=6 00:19:16.241 lat (usec) : 500=85.93%, 750=1.98%, 1000=6.09% 00:19:16.241 lat (msec) : 2=0.16%, 4=0.08%, 20=0.08%, 50=5.60% 00:19:16.241 cpu : usr=0.26%, sys=0.41%, ctx=1219, majf=0, minf=1 00:19:16.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.241 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.241 issued rwts: total=1215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.241 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1591655: Sun Jul 14 02:06:21 2024 00:19:16.241 read: IOPS=1942, BW=7769KiB/s (7955kB/s)(28.4MiB/3739msec) 00:19:16.241 slat (usec): min=5, max=30890, avg=22.89, stdev=415.74 00:19:16.241 clat (usec): min=297, max=5397, avg=484.39, stdev=163.43 00:19:16.241 lat (usec): min=307, max=31714, avg=507.28, stdev=452.17 00:19:16.241 clat percentiles (usec): 00:19:16.241 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 347], 00:19:16.241 | 30.00th=[ 392], 40.00th=[ 433], 50.00th=[ 465], 60.00th=[ 494], 00:19:16.241 | 70.00th=[ 519], 80.00th=[ 578], 90.00th=[ 668], 95.00th=[ 791], 00:19:16.241 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 1045], 99.95th=[ 1958], 00:19:16.241 | 99.99th=[ 5407] 00:19:16.241 bw ( KiB/s): min= 6856, max= 9176, per=39.55%, avg=7783.71, stdev=1000.61, samples=7 00:19:16.241 iops : min= 1714, max= 2294, avg=1945.86, stdev=250.05, samples=7 00:19:16.241 lat (usec) : 500=62.67%, 750=30.83%, 1000=6.28% 00:19:16.241 lat (msec) : 2=0.17%, 4=0.01%, 10=0.03% 00:19:16.241 cpu : usr=1.77%, sys=3.96%, ctx=7271, majf=0, minf=1 00:19:16.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.241 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.241 issued rwts: total=7263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.241 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1591656: Sun Jul 14 02:06:21 2024 00:19:16.241 read: IOPS=2143, BW=8573KiB/s (8778kB/s)(26.7MiB/3186msec) 00:19:16.241 slat (usec): min=4, max=14999, avg=20.92, stdev=222.13 00:19:16.241 clat (usec): min=308, max=3017, avg=438.19, stdev=78.36 00:19:16.241 lat (usec): min=315, max=15445, avg=459.11, stdev=236.05 00:19:16.241 clat percentiles (usec): 00:19:16.241 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 355], 20.00th=[ 371], 00:19:16.241 | 30.00th=[ 388], 40.00th=[ 412], 50.00th=[ 437], 60.00th=[ 457], 00:19:16.241 | 70.00th=[ 478], 80.00th=[ 498], 
90.00th=[ 519], 95.00th=[ 537], 00:19:16.241 | 99.00th=[ 709], 99.50th=[ 766], 99.90th=[ 816], 99.95th=[ 857], 00:19:16.241 | 99.99th=[ 3032] 00:19:16.241 bw ( KiB/s): min= 7584, max=10296, per=44.16%, avg=8690.67, stdev=1011.48, samples=6 00:19:16.241 iops : min= 1896, max= 2574, avg=2172.67, stdev=252.87, samples=6 00:19:16.241 lat (usec) : 500=81.87%, 750=17.34%, 1000=0.75% 00:19:16.241 lat (msec) : 2=0.01%, 4=0.01% 00:19:16.241 cpu : usr=1.88%, sys=4.55%, ctx=6832, majf=0, minf=1 00:19:16.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.241 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.241 issued rwts: total=6829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.241 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1591657: Sun Jul 14 02:06:21 2024 00:19:16.241 read: IOPS=1051, BW=4203KiB/s (4304kB/s)(12.1MiB/2941msec) 00:19:16.241 slat (nsec): min=5913, max=56261, avg=13997.92, stdev=5960.26 00:19:16.241 clat (usec): min=318, max=42069, avg=926.85, stdev=4187.03 00:19:16.241 lat (usec): min=335, max=42100, avg=940.85, stdev=4187.34 00:19:16.241 clat percentiles (usec): 00:19:16.241 | 1.00th=[ 355], 5.00th=[ 392], 10.00th=[ 424], 20.00th=[ 449], 00:19:16.241 | 30.00th=[ 461], 40.00th=[ 474], 50.00th=[ 486], 60.00th=[ 494], 00:19:16.241 | 70.00th=[ 502], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 750], 00:19:16.241 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:16.241 | 99.99th=[42206] 00:19:16.242 bw ( KiB/s): min= 96, max= 7640, per=25.03%, avg=4926.40, stdev=3013.56, samples=5 00:19:16.242 iops : min= 24, max= 1910, avg=1231.60, stdev=753.39, samples=5 00:19:16.242 lat (usec) : 500=68.00%, 750=26.69%, 1000=3.98% 00:19:16.242 lat (msec) : 2=0.23%, 50=1.07% 00:19:16.242 cpu : usr=0.88%, sys=2.31%, ctx=3091, majf=0, minf=1 00:19:16.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.242 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.242 issued rwts: total=3091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.242 00:19:16.242 Run status group 0 (all jobs): 00:19:16.242 READ: bw=19.2MiB/s (20.1MB/s), 1407KiB/s-8573KiB/s (1441kB/s-8778kB/s), io=71.9MiB (75.3MB), run=2941-3739msec 00:19:16.242 00:19:16.242 Disk stats (read/write): 00:19:16.242 nvme0n1: ios=1137/0, merge=0/0, ticks=3896/0, in_queue=3896, util=98.63% 00:19:16.242 nvme0n2: ios=7035/0, merge=0/0, ticks=3309/0, in_queue=3309, util=94.83% 00:19:16.242 nvme0n3: ios=6754/0, merge=0/0, ticks=3154/0, in_queue=3154, util=98.63% 00:19:16.242 nvme0n4: ios=3088/0, merge=0/0, ticks=2750/0, in_queue=2750, util=96.75% 00:19:16.499 02:06:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.499 02:06:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:16.756 02:06:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.756 02:06:22 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:17.013 02:06:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.013 02:06:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:17.271 02:06:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.271 02:06:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:17.529 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:17.529 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1591562 00:19:17.529 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:17.529 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:17.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:17.788 nvmf hotplug test: fio failed as expected 00:19:17.788 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.046 rmmod nvme_tcp 00:19:18.046 rmmod nvme_fabrics 00:19:18.046 rmmod nvme_keyring 00:19:18.046 02:06:23 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1589130 ']' 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1589130 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1589130 ']' 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1589130 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1589130 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1589130' 00:19:18.046 killing process with pid 1589130 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1589130 00:19:18.046 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1589130 00:19:18.305 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:18.305 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:18.305 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:18.305 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.305 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.305 02:06:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.305 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.305 02:06:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.216 02:06:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:20.216 00:19:20.216 real 0m23.221s 00:19:20.216 user 1m19.085s 00:19:20.216 sys 0m7.644s 00:19:20.216 02:06:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.216 02:06:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.216 ************************************ 00:19:20.216 END TEST nvmf_fio_target 00:19:20.216 ************************************ 00:19:20.216 02:06:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:20.216 02:06:25 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:20.216 02:06:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:20.216 02:06:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.216 02:06:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:20.476 ************************************ 00:19:20.476 START TEST nvmf_bdevio 00:19:20.476 ************************************ 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio 
-- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:20.476 * Looking for test storage... 00:19:20.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.476 02:06:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.477 02:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:22.442 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.442 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:22.443 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:22.443 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:22.443 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.443 02:06:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:22.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:19:22.443 00:19:22.443 --- 10.0.0.2 ping statistics --- 00:19:22.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.443 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:19:22.443 00:19:22.443 --- 10.0.0.1 ping statistics --- 00:19:22.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.443 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1594272 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1594272 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1594272 ']' 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.443 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.443 [2024-07-14 02:06:28.091604] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:22.443 [2024-07-14 02:06:28.091676] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.443 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.701 [2024-07-14 02:06:28.156494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.701 [2024-07-14 02:06:28.242599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.701 [2024-07-14 02:06:28.242667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:22.701 [2024-07-14 02:06:28.242687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.701 [2024-07-14 02:06:28.242714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.701 [2024-07-14 02:06:28.242724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.701 [2024-07-14 02:06:28.242817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:22.701 [2024-07-14 02:06:28.242860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:22.701 [2024-07-14 02:06:28.242902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:22.701 [2024-07-14 02:06:28.242906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.701 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.701 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:22.701 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.701 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.701 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.701 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.961 [2024-07-14 02:06:28.397889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.961 Malloc0 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:22.961 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:22.962 [2024-07-14 02:06:28.451719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:22.962 { 00:19:22.962 "params": { 00:19:22.962 "name": "Nvme$subsystem", 00:19:22.962 "trtype": "$TEST_TRANSPORT", 00:19:22.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.962 "adrfam": "ipv4", 00:19:22.962 "trsvcid": "$NVMF_PORT", 00:19:22.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.962 "hdgst": ${hdgst:-false}, 00:19:22.962 "ddgst": ${ddgst:-false} 00:19:22.962 }, 00:19:22.962 "method": "bdev_nvme_attach_controller" 00:19:22.962 } 00:19:22.962 EOF 00:19:22.962 )") 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:22.962 02:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:22.962 "params": { 00:19:22.962 "name": "Nvme1", 00:19:22.962 "trtype": "tcp", 00:19:22.962 "traddr": "10.0.0.2", 00:19:22.962 "adrfam": "ipv4", 00:19:22.962 "trsvcid": "4420", 00:19:22.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.962 "hdgst": false, 00:19:22.962 "ddgst": false 00:19:22.962 }, 00:19:22.962 "method": "bdev_nvme_attach_controller" 00:19:22.962 }' 00:19:22.962 [2024-07-14 02:06:28.498270] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:19:22.962 [2024-07-14 02:06:28.498354] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594412 ] 00:19:22.962 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.962 [2024-07-14 02:06:28.563613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:23.221 [2024-07-14 02:06:28.655845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.221 [2024-07-14 02:06:28.655903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.221 [2024-07-14 02:06:28.655908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.481 I/O targets: 00:19:23.481 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:23.481 00:19:23.481 00:19:23.481 CUnit - A unit testing framework for C - Version 2.1-3 00:19:23.481 http://cunit.sourceforge.net/ 00:19:23.481 00:19:23.481 00:19:23.481 Suite: bdevio tests on: Nvme1n1 00:19:23.481 Test: blockdev write read block ...passed 00:19:23.481 Test: blockdev write zeroes read block ...passed 00:19:23.481 Test: blockdev write zeroes read no split ...passed 00:19:23.481 Test: blockdev write zeroes read split ...passed 00:19:23.740 Test: blockdev write zeroes read split partial ...passed 00:19:23.740 Test: blockdev reset ...[2024-07-14 02:06:29.204796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:23.740 [2024-07-14 02:06:29.204919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183c90 (9): Bad file descriptor 00:19:23.740 [2024-07-14 02:06:29.314479] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:23.740 passed 00:19:23.740 Test: blockdev write read 8 blocks ...passed 00:19:23.740 Test: blockdev write read size > 128k ...passed 00:19:23.740 Test: blockdev write read invalid size ...passed 00:19:23.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:23.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:23.740 Test: blockdev write read max offset ...passed 00:19:23.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:23.999 Test: blockdev writev readv 8 blocks ...passed 00:19:23.999 Test: blockdev writev readv 30 x 1block ...passed 00:19:23.999 Test: blockdev writev readv block ...passed 00:19:23.999 Test: blockdev writev readv size > 128k ...passed 00:19:23.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:23.999 Test: blockdev comparev and writev ...[2024-07-14 02:06:29.490792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.999 [2024-07-14 02:06:29.490829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:23.999 [2024-07-14 02:06:29.490853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.999 [2024-07-14 02:06:29.490879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.999 [2024-07-14 02:06:29.491307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.999 [2024-07-14 02:06:29.491331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:23.999 [2024-07-14 02:06:29.491361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.999 [2024-07-14 02:06:29.491378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:23.999 [2024-07-14 02:06:29.491825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.999 [2024-07-14 02:06:29.491850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:23.999 [2024-07-14 02:06:29.491882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.999 [2024-07-14 02:06:29.491907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:23.999 [2024-07-14 02:06:29.492325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.999 [2024-07-14 02:06:29.492349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:23.999 [2024-07-14 02:06:29.492371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.999 [2024-07-14 02:06:29.492387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:23.999 passed 00:19:24.000 Test: blockdev nvme passthru rw ...passed 00:19:24.000 Test: blockdev nvme passthru vendor specific ...[2024-07-14 02:06:29.575271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:24.000 [2024-07-14 02:06:29.575298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:24.000 [2024-07-14 02:06:29.575500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:24.000 [2024-07-14 02:06:29.575523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:24.000 [2024-07-14 02:06:29.575722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:24.000 [2024-07-14 02:06:29.575743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:24.000 [2024-07-14 02:06:29.575945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:24.000 [2024-07-14 02:06:29.575969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:24.000 passed 00:19:24.000 Test: blockdev nvme admin passthru ...passed 00:19:24.000 Test: blockdev copy ...passed 00:19:24.000 00:19:24.000 Run Summary: Type Total Ran Passed Failed Inactive 00:19:24.000 suites 1 1 n/a 0 0 00:19:24.000 tests 23 23 23 0 0 00:19:24.000 asserts 152 152 152 0 n/a 00:19:24.000 00:19:24.000 Elapsed time = 1.282 seconds 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:24.259 rmmod nvme_tcp 00:19:24.259 rmmod nvme_fabrics 00:19:24.259 rmmod nvme_keyring 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1594272 ']' 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1594272 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1594272 ']' 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1594272 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1594272 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1594272' 00:19:24.259 killing process with pid 1594272 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1594272 00:19:24.259 02:06:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1594272 00:19:24.519 02:06:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:24.519 02:06:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:24.519 02:06:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:24.519 02:06:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:24.519 02:06:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:24.519 02:06:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.519 02:06:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.519 02:06:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.058 02:06:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.058 00:19:27.058 real 0m6.312s 00:19:27.058 user 0m10.910s 00:19:27.058 sys 0m2.008s 00:19:27.058 02:06:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.058 02:06:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:27.058 ************************************ 00:19:27.058 END TEST nvmf_bdevio 00:19:27.058 ************************************ 00:19:27.058 02:06:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:27.058 02:06:32 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:27.058 02:06:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:27.058 02:06:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.058 02:06:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:27.058 ************************************ 00:19:27.058 START TEST nvmf_auth_target 00:19:27.058 ************************************ 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:27.058 * Looking for test storage... 
00:19:27.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.058 02:06:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:27.059 02:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.968 02:06:34 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:28.968 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:28.968 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:28.968 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:28.968 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:28.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:19:28.968 00:19:28.968 --- 10.0.0.2 ping statistics --- 00:19:28.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.968 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:19:28.968 00:19:28.968 --- 10.0.0.1 ping statistics --- 00:19:28.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.968 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1596483 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1596483 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1596483 ']' 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
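[editor's note] The nvmftestinit trace above boils down to a small amount of namespace plumbing before the target is launched. A condensed sketch of the same steps, assuming the same cvl_0_0/cvl_0_1 ice interfaces and 10.0.0.0/24 addressing used in this run (paths shortened to the spdk checkout root):

  # move the target-side port into its own namespace; keep the initiator side in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                                   # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check
  modprobe nvme-tcp
  # the nvmf target then runs inside the namespace, as in the nvmfpid line below
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

Every command here appears verbatim (with absolute paths) in the trace; the sketch only collects them in one place for readers reproducing the topology by hand.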
00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:28.968 02:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1596503 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b43d05b505658c4da2643f639fca010d188b2b938d95661d 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MNa 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b43d05b505658c4da2643f639fca010d188b2b938d95661d 0 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b43d05b505658c4da2643f639fca010d188b2b938d95661d 0 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b43d05b505658c4da2643f639fca010d188b2b938d95661d 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MNa 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MNa 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.MNa 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:29.228 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=35aa672c3a573e7ec7db201a8361050cc98c9601792a44e4e86f5b58aea55518 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MqJ 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 35aa672c3a573e7ec7db201a8361050cc98c9601792a44e4e86f5b58aea55518 3 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 35aa672c3a573e7ec7db201a8361050cc98c9601792a44e4e86f5b58aea55518 3 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=35aa672c3a573e7ec7db201a8361050cc98c9601792a44e4e86f5b58aea55518 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MqJ 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MqJ 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.MqJ 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=76c7b952bd8d917ce4af73e4fa4f5256 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.u4r 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 76c7b952bd8d917ce4af73e4fa4f5256 1 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 76c7b952bd8d917ce4af73e4fa4f5256 1 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=76c7b952bd8d917ce4af73e4fa4f5256 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.u4r 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.u4r 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.u4r 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3087f766b6ec281b9e9d1a8d1b4be954663a4727449094f9 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Vt5 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3087f766b6ec281b9e9d1a8d1b4be954663a4727449094f9 2 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3087f766b6ec281b9e9d1a8d1b4be954663a4727449094f9 2 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3087f766b6ec281b9e9d1a8d1b4be954663a4727449094f9 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Vt5 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Vt5 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Vt5 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=428ffba5da9d0d87543b2258ed75eb889400f9a5bf2e9f32 00:19:29.229 
02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3mM 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 428ffba5da9d0d87543b2258ed75eb889400f9a5bf2e9f32 2 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 428ffba5da9d0d87543b2258ed75eb889400f9a5bf2e9f32 2 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=428ffba5da9d0d87543b2258ed75eb889400f9a5bf2e9f32 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:29.229 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3mM 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3mM 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.3mM 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=706bf95e3f3e75082d72e9bdf5d69ff8 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.JUT 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 706bf95e3f3e75082d72e9bdf5d69ff8 1 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 706bf95e3f3e75082d72e9bdf5d69ff8 1 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=706bf95e3f3e75082d72e9bdf5d69ff8 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.JUT 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.JUT 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.JUT 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2c47728b1ce455f399d68b1e8339fcaa7b9d981660c65bcec1d3b91f8ff6eba7 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZzA 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2c47728b1ce455f399d68b1e8339fcaa7b9d981660c65bcec1d3b91f8ff6eba7 3 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2c47728b1ce455f399d68b1e8339fcaa7b9d981660c65bcec1d3b91f8ff6eba7 3 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2c47728b1ce455f399d68b1e8339fcaa7b9d981660c65bcec1d3b91f8ff6eba7 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:29.488 02:06:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZzA 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZzA 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ZzA 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1596483 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1596483 ']' 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
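[editor's note] The gen_dhchap_key calls traced above each draw a hex key from /dev/urandom with xxd, wrap it in the DHHC-1 secret representation, and write it to a mode-0600 temp file. A minimal sketch of the same flow for the key0/null case; the base64/CRC-32 wrapping is my reading of the DHHC-1 format and of the secrets that show up later in this log (e.g. DHHC-1:00:YjQz... for key b43d05...), not the helper's exact code:

  key=$(xxd -p -c0 -l 24 /dev/urandom)          # 48 hex chars, as in the trace
  file=$(mktemp -t spdk.key-null.XXX)
  # DHHC-1:<digest index>:<base64(ASCII secret || CRC-32 of secret, little endian)>:
  # digest index per the digests map above: 0=null, 1=sha256, 2=sha384, 3=sha512
  python3 -c "import base64, zlib, sys; k = sys.argv[1].encode(); \
  print('DHHC-1:00:' + base64.b64encode(k + zlib.crc32(k).to_bytes(4, 'little')).decode() + ':')" \
    "$key" > "$file"
  chmod 0600 "$file"
  echo "$file"

The resulting files are what the test registers next on both sides, i.e. rpc_cmd keyring_file_add_key keyN <file> against the target and the same call through rpc.py -s /var/tmp/host.sock for the host, and the same DHHC-1 strings reappear inline in the nvme connect --dhchap-secret lines further down.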
00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.488 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.746 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.746 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:29.746 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1596503 /var/tmp/host.sock 00:19:29.746 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1596503 ']' 00:19:29.746 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:29.746 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.746 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:29.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:29.746 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.746 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.004 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.004 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:30.004 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:30.004 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.004 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.004 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.005 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:30.005 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MNa 00:19:30.005 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.005 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.005 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.005 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.MNa 00:19:30.005 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.MNa 00:19:30.263 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.MqJ ]] 00:19:30.263 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MqJ 00:19:30.263 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.263 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.263 02:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.263 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MqJ 00:19:30.263 02:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MqJ 00:19:30.521 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:30.521 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.u4r 00:19:30.521 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.521 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.521 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.521 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.u4r 00:19:30.521 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.u4r 00:19:30.779 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Vt5 ]] 00:19:30.779 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vt5 00:19:30.779 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.779 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.780 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.780 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vt5 00:19:30.780 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vt5 00:19:31.037 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:31.037 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3mM 00:19:31.037 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.037 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.037 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.037 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.3mM 00:19:31.037 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.3mM 00:19:31.295 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.JUT ]] 00:19:31.295 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JUT 00:19:31.295 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.295 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.295 02:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.295 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JUT 00:19:31.295 02:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.JUT 00:19:31.553 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:31.553 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZzA 00:19:31.553 02:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.553 02:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.553 02:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.553 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ZzA 00:19:31.553 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ZzA 00:19:31.811 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:31.811 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:31.811 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.811 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.811 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.811 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.069 02:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.328 00:19:32.328 02:06:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.328 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.328 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.586 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.586 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.586 02:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.586 02:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.586 02:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.586 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.586 { 00:19:32.586 "cntlid": 1, 00:19:32.586 "qid": 0, 00:19:32.586 "state": "enabled", 00:19:32.586 "thread": "nvmf_tgt_poll_group_000", 00:19:32.586 "listen_address": { 00:19:32.586 "trtype": "TCP", 00:19:32.586 "adrfam": "IPv4", 00:19:32.586 "traddr": "10.0.0.2", 00:19:32.586 "trsvcid": "4420" 00:19:32.586 }, 00:19:32.586 "peer_address": { 00:19:32.586 "trtype": "TCP", 00:19:32.586 "adrfam": "IPv4", 00:19:32.586 "traddr": "10.0.0.1", 00:19:32.586 "trsvcid": "44410" 00:19:32.586 }, 00:19:32.586 "auth": { 00:19:32.586 "state": "completed", 00:19:32.586 "digest": "sha256", 00:19:32.586 "dhgroup": "null" 00:19:32.586 } 00:19:32.586 } 00:19:32.586 ]' 00:19:32.586 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.844 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.844 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.844 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:32.844 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.844 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.844 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.844 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.101 02:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:19:34.039 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.039 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.039 02:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.039 02:06:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.039 02:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.039 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.039 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.039 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.297 02:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.556 00:19:34.556 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.556 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.556 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.814 { 00:19:34.814 "cntlid": 3, 00:19:34.814 "qid": 0, 00:19:34.814 
"state": "enabled", 00:19:34.814 "thread": "nvmf_tgt_poll_group_000", 00:19:34.814 "listen_address": { 00:19:34.814 "trtype": "TCP", 00:19:34.814 "adrfam": "IPv4", 00:19:34.814 "traddr": "10.0.0.2", 00:19:34.814 "trsvcid": "4420" 00:19:34.814 }, 00:19:34.814 "peer_address": { 00:19:34.814 "trtype": "TCP", 00:19:34.814 "adrfam": "IPv4", 00:19:34.814 "traddr": "10.0.0.1", 00:19:34.814 "trsvcid": "44434" 00:19:34.814 }, 00:19:34.814 "auth": { 00:19:34.814 "state": "completed", 00:19:34.814 "digest": "sha256", 00:19:34.814 "dhgroup": "null" 00:19:34.814 } 00:19:34.814 } 00:19:34.814 ]' 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:34.814 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.074 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.074 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.074 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.333 02:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:19:36.269 02:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.269 02:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.269 02:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.269 02:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.269 02:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.269 02:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.269 02:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.269 02:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:36.526 02:06:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.526 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.783 00:19:36.783 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.783 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.783 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.041 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.041 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.041 02:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.041 02:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.041 02:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.041 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.041 { 00:19:37.041 "cntlid": 5, 00:19:37.041 "qid": 0, 00:19:37.041 "state": "enabled", 00:19:37.041 "thread": "nvmf_tgt_poll_group_000", 00:19:37.041 "listen_address": { 00:19:37.041 "trtype": "TCP", 00:19:37.041 "adrfam": "IPv4", 00:19:37.041 "traddr": "10.0.0.2", 00:19:37.041 "trsvcid": "4420" 00:19:37.041 }, 00:19:37.041 "peer_address": { 00:19:37.041 "trtype": "TCP", 00:19:37.041 "adrfam": "IPv4", 00:19:37.041 "traddr": "10.0.0.1", 00:19:37.041 "trsvcid": "44456" 00:19:37.041 }, 00:19:37.041 "auth": { 00:19:37.041 "state": "completed", 00:19:37.041 "digest": "sha256", 00:19:37.041 "dhgroup": "null" 00:19:37.041 } 00:19:37.041 } 00:19:37.041 ]' 00:19:37.041 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.041 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.041 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.299 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:37.299 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:37.299 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.299 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.299 02:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.556 02:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:19:38.491 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.491 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.491 02:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.491 02:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.491 02:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.491 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.491 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.491 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.748 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.006 00:19:39.006 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.006 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.006 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.264 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.264 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.264 02:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.264 02:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.264 02:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.264 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.264 { 00:19:39.264 "cntlid": 7, 00:19:39.264 "qid": 0, 00:19:39.264 "state": "enabled", 00:19:39.264 "thread": "nvmf_tgt_poll_group_000", 00:19:39.264 "listen_address": { 00:19:39.264 "trtype": "TCP", 00:19:39.264 "adrfam": "IPv4", 00:19:39.264 "traddr": "10.0.0.2", 00:19:39.264 "trsvcid": "4420" 00:19:39.264 }, 00:19:39.264 "peer_address": { 00:19:39.264 "trtype": "TCP", 00:19:39.264 "adrfam": "IPv4", 00:19:39.264 "traddr": "10.0.0.1", 00:19:39.264 "trsvcid": "44472" 00:19:39.264 }, 00:19:39.264 "auth": { 00:19:39.264 "state": "completed", 00:19:39.264 "digest": "sha256", 00:19:39.264 "dhgroup": "null" 00:19:39.264 } 00:19:39.264 } 00:19:39.264 ]' 00:19:39.264 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.522 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.522 02:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.522 02:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:39.522 02:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.522 02:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.522 02:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.522 02:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.779 02:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:19:40.717 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.717 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.717 02:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.717 02:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.717 02:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.717 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.717 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.717 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.717 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.974 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:40.974 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.974 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.974 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:40.974 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:40.974 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.975 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.975 02:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.975 02:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.975 02:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.975 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.975 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.232 00:19:41.492 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.492 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.492 02:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.492 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.492 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.492 02:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:41.492 02:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.492 02:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.492 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.492 { 00:19:41.492 "cntlid": 9, 00:19:41.492 "qid": 0, 00:19:41.492 "state": "enabled", 00:19:41.492 "thread": "nvmf_tgt_poll_group_000", 00:19:41.492 "listen_address": { 00:19:41.492 "trtype": "TCP", 00:19:41.492 "adrfam": "IPv4", 00:19:41.492 "traddr": "10.0.0.2", 00:19:41.492 "trsvcid": "4420" 00:19:41.492 }, 00:19:41.492 "peer_address": { 00:19:41.492 "trtype": "TCP", 00:19:41.492 "adrfam": "IPv4", 00:19:41.492 "traddr": "10.0.0.1", 00:19:41.492 "trsvcid": "44484" 00:19:41.492 }, 00:19:41.492 "auth": { 00:19:41.492 "state": "completed", 00:19:41.492 "digest": "sha256", 00:19:41.492 "dhgroup": "ffdhe2048" 00:19:41.492 } 00:19:41.492 } 00:19:41.492 ]' 00:19:41.492 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.751 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.751 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.751 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.751 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.751 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.751 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.751 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.009 02:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:19:42.946 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.946 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.946 02:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.946 02:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.946 02:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.946 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.946 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.946 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.203 02:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.460 00:19:43.717 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.717 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.717 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.976 { 00:19:43.976 "cntlid": 11, 00:19:43.976 "qid": 0, 00:19:43.976 "state": "enabled", 00:19:43.976 "thread": "nvmf_tgt_poll_group_000", 00:19:43.976 "listen_address": { 00:19:43.976 "trtype": "TCP", 00:19:43.976 "adrfam": "IPv4", 00:19:43.976 "traddr": "10.0.0.2", 00:19:43.976 "trsvcid": "4420" 00:19:43.976 }, 00:19:43.976 "peer_address": { 00:19:43.976 "trtype": "TCP", 00:19:43.976 "adrfam": "IPv4", 00:19:43.976 "traddr": "10.0.0.1", 00:19:43.976 "trsvcid": "58196" 00:19:43.976 }, 00:19:43.976 "auth": { 00:19:43.976 "state": "completed", 00:19:43.976 "digest": "sha256", 00:19:43.976 "dhgroup": "ffdhe2048" 00:19:43.976 } 00:19:43.976 } 00:19:43.976 ]' 00:19:43.976 
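The qpairs dump above (cntlid 11, sha256/ffdhe2048) is what the trace checks field by field next. Stripped of the xtrace noise, each connect_authenticate pass in this run reduces to the sequence below. This is a hand-written summary, not part of the trace; the rpc.py path, sockets, NQN/UUID and keyid are simply the ones this particular run happens to use.

# One connect_authenticate pass (digest=sha256, dhgroup=ffdhe2048, keyid=1), distilled from the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_RPC="$RPC -s /var/tmp/host.sock"          # host-side bdev_nvme instance (the trace's "hostrpc")
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Limit the host to the digest/dhgroup combination under test.
$HOST_RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# Register the host on the target with the DH-CHAP key pair for this keyid.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Attach a controller from the host, authenticating with the same keys.
$HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Confirm the qpair on the target reports the negotiated digest, dhgroup, and auth state.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth | .digest, .dhgroup, .state'
# Tear down before the next digest/dhgroup/key combination.
$HOST_RPC bdev_nvme_detach_controller nvme0

The trace then repeats exactly this cycle for every keyid and, in the outer loop, for every dhgroup, which is why the same RPC commands recur below with only the key index and dhgroup changing.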
02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.976 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.233 02:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:19:45.167 02:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.167 02:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.167 02:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.167 02:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.167 02:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.167 02:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.167 02:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:45.167 02:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.425 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.684 00:19:45.684 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.684 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.684 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.943 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.943 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.943 02:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.943 02:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.202 { 00:19:46.202 "cntlid": 13, 00:19:46.202 "qid": 0, 00:19:46.202 "state": "enabled", 00:19:46.202 "thread": "nvmf_tgt_poll_group_000", 00:19:46.202 "listen_address": { 00:19:46.202 "trtype": "TCP", 00:19:46.202 "adrfam": "IPv4", 00:19:46.202 "traddr": "10.0.0.2", 00:19:46.202 "trsvcid": "4420" 00:19:46.202 }, 00:19:46.202 "peer_address": { 00:19:46.202 "trtype": "TCP", 00:19:46.202 "adrfam": "IPv4", 00:19:46.202 "traddr": "10.0.0.1", 00:19:46.202 "trsvcid": "58230" 00:19:46.202 }, 00:19:46.202 "auth": { 00:19:46.202 "state": "completed", 00:19:46.202 "digest": "sha256", 00:19:46.202 "dhgroup": "ffdhe2048" 00:19:46.202 } 00:19:46.202 } 00:19:46.202 ]' 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.202 02:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.460 02:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:19:47.395 02:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.395 02:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.395 02:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.395 02:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.395 02:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.395 02:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.395 02:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.395 02:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.654 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.223 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.223 { 00:19:48.223 "cntlid": 15, 00:19:48.223 "qid": 0, 00:19:48.223 "state": "enabled", 00:19:48.223 "thread": "nvmf_tgt_poll_group_000", 00:19:48.223 "listen_address": { 00:19:48.223 "trtype": "TCP", 00:19:48.223 "adrfam": "IPv4", 00:19:48.223 "traddr": "10.0.0.2", 00:19:48.223 "trsvcid": "4420" 00:19:48.223 }, 00:19:48.223 "peer_address": { 00:19:48.223 "trtype": "TCP", 00:19:48.223 "adrfam": "IPv4", 00:19:48.223 "traddr": "10.0.0.1", 00:19:48.223 "trsvcid": "58268" 00:19:48.223 }, 00:19:48.223 "auth": { 00:19:48.223 "state": "completed", 00:19:48.223 "digest": "sha256", 00:19:48.223 "dhgroup": "ffdhe2048" 00:19:48.223 } 00:19:48.223 } 00:19:48.223 ]' 00:19:48.223 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.481 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.481 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.481 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.481 02:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.481 02:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.481 02:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.481 02:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.739 02:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:19:49.706 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.706 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.706 02:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.706 02:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.706 02:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.706 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.706 02:06:55 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.706 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.706 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.965 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.535 00:19:50.535 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.535 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.535 02:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.535 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.535 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.535 02:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.535 02:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.535 02:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.535 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.535 { 00:19:50.535 "cntlid": 17, 00:19:50.535 "qid": 0, 00:19:50.535 "state": "enabled", 00:19:50.535 "thread": "nvmf_tgt_poll_group_000", 00:19:50.535 "listen_address": { 00:19:50.535 "trtype": "TCP", 00:19:50.535 "adrfam": "IPv4", 00:19:50.535 "traddr": 
"10.0.0.2", 00:19:50.535 "trsvcid": "4420" 00:19:50.535 }, 00:19:50.535 "peer_address": { 00:19:50.535 "trtype": "TCP", 00:19:50.535 "adrfam": "IPv4", 00:19:50.535 "traddr": "10.0.0.1", 00:19:50.535 "trsvcid": "58282" 00:19:50.535 }, 00:19:50.535 "auth": { 00:19:50.535 "state": "completed", 00:19:50.535 "digest": "sha256", 00:19:50.535 "dhgroup": "ffdhe3072" 00:19:50.535 } 00:19:50.535 } 00:19:50.535 ]' 00:19:50.535 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.794 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.794 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.794 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.794 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.794 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.794 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.794 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.052 02:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:19:51.990 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.990 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.990 02:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.990 02:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.990 02:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.990 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.990 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.990 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.249 02:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.817 00:19:52.817 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.817 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.817 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.817 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.817 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.818 02:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.818 02:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.818 02:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.818 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.818 { 00:19:52.818 "cntlid": 19, 00:19:52.818 "qid": 0, 00:19:52.818 "state": "enabled", 00:19:52.818 "thread": "nvmf_tgt_poll_group_000", 00:19:52.818 "listen_address": { 00:19:52.818 "trtype": "TCP", 00:19:52.818 "adrfam": "IPv4", 00:19:52.818 "traddr": "10.0.0.2", 00:19:52.818 "trsvcid": "4420" 00:19:52.818 }, 00:19:52.818 "peer_address": { 00:19:52.818 "trtype": "TCP", 00:19:52.818 "adrfam": "IPv4", 00:19:52.818 "traddr": "10.0.0.1", 00:19:52.818 "trsvcid": "35262" 00:19:52.818 }, 00:19:52.818 "auth": { 00:19:52.818 "state": "completed", 00:19:52.818 "digest": "sha256", 00:19:52.818 "dhgroup": "ffdhe3072" 00:19:52.818 } 00:19:52.818 } 00:19:52.818 ]' 00:19:52.818 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.076 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.076 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.076 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.076 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.076 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.076 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.076 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.334 02:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:19:54.272 02:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.272 02:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.272 02:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.272 02:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.272 02:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.272 02:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.272 02:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.272 02:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.530 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.099 00:19:55.099 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.099 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.099 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.099 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.357 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.357 02:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.357 02:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.357 02:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.357 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.357 { 00:19:55.357 "cntlid": 21, 00:19:55.357 "qid": 0, 00:19:55.357 "state": "enabled", 00:19:55.357 "thread": "nvmf_tgt_poll_group_000", 00:19:55.357 "listen_address": { 00:19:55.357 "trtype": "TCP", 00:19:55.357 "adrfam": "IPv4", 00:19:55.357 "traddr": "10.0.0.2", 00:19:55.357 "trsvcid": "4420" 00:19:55.357 }, 00:19:55.357 "peer_address": { 00:19:55.357 "trtype": "TCP", 00:19:55.357 "adrfam": "IPv4", 00:19:55.357 "traddr": "10.0.0.1", 00:19:55.357 "trsvcid": "35300" 00:19:55.357 }, 00:19:55.357 "auth": { 00:19:55.357 "state": "completed", 00:19:55.357 "digest": "sha256", 00:19:55.357 "dhgroup": "ffdhe3072" 00:19:55.357 } 00:19:55.357 } 00:19:55.357 ]' 00:19:55.357 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.358 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.358 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.358 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.358 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.358 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.358 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.358 02:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.616 02:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:19:56.552 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
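The disconnect above closes the kernel-initiator leg of the pass, which the trace runs after every RPC attach/detach. Pulled out of the trace on its own, that leg looks like the commands below; again a hand-written summary, with the long DHHC-1 secrets replaced by placeholders (the trace passes the real plaintext blobs for the keyid under test).

# Kernel host side of a pass: connect with the DHHC-1 secrets, disconnect, then deregister the host on the target.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:02:<key2 placeholder>' --dhchap-ctrl-secret 'DHHC-1:01:<ckey2 placeholder>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55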
00:19:56.552 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.552 02:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.552 02:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.552 02:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.552 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.552 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.552 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.810 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.377 00:19:57.377 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.377 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.377 02:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.636 { 00:19:57.636 "cntlid": 23, 00:19:57.636 "qid": 0, 00:19:57.636 "state": "enabled", 00:19:57.636 "thread": "nvmf_tgt_poll_group_000", 00:19:57.636 "listen_address": { 00:19:57.636 "trtype": "TCP", 00:19:57.636 "adrfam": "IPv4", 00:19:57.636 "traddr": "10.0.0.2", 00:19:57.636 "trsvcid": "4420" 00:19:57.636 }, 00:19:57.636 "peer_address": { 00:19:57.636 "trtype": "TCP", 00:19:57.636 "adrfam": "IPv4", 00:19:57.636 "traddr": "10.0.0.1", 00:19:57.636 "trsvcid": "35316" 00:19:57.636 }, 00:19:57.636 "auth": { 00:19:57.636 "state": "completed", 00:19:57.636 "digest": "sha256", 00:19:57.636 "dhgroup": "ffdhe3072" 00:19:57.636 } 00:19:57.636 } 00:19:57.636 ]' 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.636 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.894 02:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:19:58.825 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.826 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.826 02:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.826 02:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.826 02:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.826 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.826 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.826 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.826 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.082 02:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.648 00:19:59.648 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.648 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.648 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.906 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.906 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.906 02:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.906 02:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.906 02:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.906 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.906 { 00:19:59.906 "cntlid": 25, 00:19:59.906 "qid": 0, 00:19:59.906 "state": "enabled", 00:19:59.906 "thread": "nvmf_tgt_poll_group_000", 00:19:59.906 "listen_address": { 00:19:59.906 "trtype": "TCP", 00:19:59.906 "adrfam": "IPv4", 00:19:59.906 "traddr": "10.0.0.2", 00:19:59.906 "trsvcid": "4420" 00:19:59.906 }, 00:19:59.906 "peer_address": { 00:19:59.906 "trtype": "TCP", 00:19:59.906 "adrfam": "IPv4", 00:19:59.906 "traddr": "10.0.0.1", 00:19:59.906 "trsvcid": "35332" 00:19:59.906 }, 00:19:59.906 "auth": { 00:19:59.906 "state": "completed", 00:19:59.906 "digest": "sha256", 00:19:59.906 "dhgroup": "ffdhe4096" 00:19:59.906 } 00:19:59.906 } 00:19:59.906 ]' 00:19:59.906 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.906 02:07:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.907 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.907 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.907 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.907 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.907 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.907 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.166 02:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:20:01.543 02:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.543 02:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.543 02:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.543 02:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.543 02:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.543 02:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.543 02:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.543 02:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.543 02:07:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.543 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.108 00:20:02.108 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.108 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.108 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.108 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.108 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.108 02:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.108 02:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.108 02:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.108 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.108 { 00:20:02.108 "cntlid": 27, 00:20:02.108 "qid": 0, 00:20:02.108 "state": "enabled", 00:20:02.108 "thread": "nvmf_tgt_poll_group_000", 00:20:02.108 "listen_address": { 00:20:02.108 "trtype": "TCP", 00:20:02.108 "adrfam": "IPv4", 00:20:02.108 "traddr": "10.0.0.2", 00:20:02.108 "trsvcid": "4420" 00:20:02.108 }, 00:20:02.108 "peer_address": { 00:20:02.108 "trtype": "TCP", 00:20:02.108 "adrfam": "IPv4", 00:20:02.108 "traddr": "10.0.0.1", 00:20:02.108 "trsvcid": "35360" 00:20:02.108 }, 00:20:02.108 "auth": { 00:20:02.108 "state": "completed", 00:20:02.108 "digest": "sha256", 00:20:02.108 "dhgroup": "ffdhe4096" 00:20:02.108 } 00:20:02.108 } 00:20:02.108 ]' 00:20:02.364 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.364 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.365 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.365 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.365 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.365 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.365 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.365 02:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.621 02:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:20:03.572 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.572 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.572 02:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.572 02:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.572 02:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.572 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.572 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.572 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.829 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.395 00:20:04.395 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.395 02:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.395 02:07:09 
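Each iteration is then verified from both ends: the host must report the attached controller and the target's qpair listing must show the negotiated digest, DH group and a completed authentication state. A minimal sketch of that check, mirroring the jq filters at target/auth.sh@44-48 in the trace (expected values here are the ones for the ffdhe4096 iterations):

# Host: the controller created by the attach call should be listed as nvme0.
name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]

# Target: dump the subsystem qpairs and check the auth block.
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]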
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.653 { 00:20:04.653 "cntlid": 29, 00:20:04.653 "qid": 0, 00:20:04.653 "state": "enabled", 00:20:04.653 "thread": "nvmf_tgt_poll_group_000", 00:20:04.653 "listen_address": { 00:20:04.653 "trtype": "TCP", 00:20:04.653 "adrfam": "IPv4", 00:20:04.653 "traddr": "10.0.0.2", 00:20:04.653 "trsvcid": "4420" 00:20:04.653 }, 00:20:04.653 "peer_address": { 00:20:04.653 "trtype": "TCP", 00:20:04.653 "adrfam": "IPv4", 00:20:04.653 "traddr": "10.0.0.1", 00:20:04.653 "trsvcid": "37140" 00:20:04.653 }, 00:20:04.653 "auth": { 00:20:04.653 "state": "completed", 00:20:04.653 "digest": "sha256", 00:20:04.653 "dhgroup": "ffdhe4096" 00:20:04.653 } 00:20:04.653 } 00:20:04.653 ]' 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.653 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.654 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.654 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.911 02:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:20:05.844 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.845 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.845 02:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.845 02:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.845 02:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.845 02:07:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.845 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.845 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.104 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:06.104 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.104 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.104 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.104 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.104 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.104 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:06.104 02:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.104 02:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.363 02:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.363 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.363 02:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.621 00:20:06.621 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.621 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.621 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.880 { 00:20:06.880 "cntlid": 31, 00:20:06.880 "qid": 0, 00:20:06.880 "state": "enabled", 00:20:06.880 "thread": "nvmf_tgt_poll_group_000", 00:20:06.880 "listen_address": { 00:20:06.880 "trtype": "TCP", 00:20:06.880 "adrfam": "IPv4", 00:20:06.880 "traddr": "10.0.0.2", 00:20:06.880 "trsvcid": "4420" 00:20:06.880 }, 
00:20:06.880 "peer_address": { 00:20:06.880 "trtype": "TCP", 00:20:06.880 "adrfam": "IPv4", 00:20:06.880 "traddr": "10.0.0.1", 00:20:06.880 "trsvcid": "37168" 00:20:06.880 }, 00:20:06.880 "auth": { 00:20:06.880 "state": "completed", 00:20:06.880 "digest": "sha256", 00:20:06.880 "dhgroup": "ffdhe4096" 00:20:06.880 } 00:20:06.880 } 00:20:06.880 ]' 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.880 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.141 02:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:20:08.515 02:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.515 02:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.515 02:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.515 02:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.515 02:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.515 02:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.515 02:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.515 02:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.515 02:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.515 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.082 00:20:09.082 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.082 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.082 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.340 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.340 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.340 02:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.340 02:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.340 02:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.340 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.340 { 00:20:09.340 "cntlid": 33, 00:20:09.340 "qid": 0, 00:20:09.340 "state": "enabled", 00:20:09.340 "thread": "nvmf_tgt_poll_group_000", 00:20:09.340 "listen_address": { 00:20:09.340 "trtype": "TCP", 00:20:09.340 "adrfam": "IPv4", 00:20:09.340 "traddr": "10.0.0.2", 00:20:09.340 "trsvcid": "4420" 00:20:09.340 }, 00:20:09.340 "peer_address": { 00:20:09.340 "trtype": "TCP", 00:20:09.340 "adrfam": "IPv4", 00:20:09.340 "traddr": "10.0.0.1", 00:20:09.340 "trsvcid": "37202" 00:20:09.340 }, 00:20:09.340 "auth": { 00:20:09.340 "state": "completed", 00:20:09.340 "digest": "sha256", 00:20:09.340 "dhgroup": "ffdhe6144" 00:20:09.340 } 00:20:09.340 } 00:20:09.340 ]' 00:20:09.340 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.340 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.340 02:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.340 02:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.340 02:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.598 02:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.598 02:07:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.598 02:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.858 02:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:20:10.796 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.796 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.796 02:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.796 02:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.796 02:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.796 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.797 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.797 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.055 02:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.621 00:20:11.621 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.621 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.621 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.879 { 00:20:11.879 "cntlid": 35, 00:20:11.879 "qid": 0, 00:20:11.879 "state": "enabled", 00:20:11.879 "thread": "nvmf_tgt_poll_group_000", 00:20:11.879 "listen_address": { 00:20:11.879 "trtype": "TCP", 00:20:11.879 "adrfam": "IPv4", 00:20:11.879 "traddr": "10.0.0.2", 00:20:11.879 "trsvcid": "4420" 00:20:11.879 }, 00:20:11.879 "peer_address": { 00:20:11.879 "trtype": "TCP", 00:20:11.879 "adrfam": "IPv4", 00:20:11.879 "traddr": "10.0.0.1", 00:20:11.879 "trsvcid": "37238" 00:20:11.879 }, 00:20:11.879 "auth": { 00:20:11.879 "state": "completed", 00:20:11.879 "digest": "sha256", 00:20:11.879 "dhgroup": "ffdhe6144" 00:20:11.879 } 00:20:11.879 } 00:20:11.879 ]' 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.879 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.139 02:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:20:13.077 02:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.077 02:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.077 02:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.077 02:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.077 02:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.077 02:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.077 02:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.077 02:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.336 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.904 00:20:13.904 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.904 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.904 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.162 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.162 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.162 02:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.162 02:07:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.162 02:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.162 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.162 { 00:20:14.162 "cntlid": 37, 00:20:14.162 "qid": 0, 00:20:14.162 "state": "enabled", 00:20:14.162 "thread": "nvmf_tgt_poll_group_000", 00:20:14.162 "listen_address": { 00:20:14.162 "trtype": "TCP", 00:20:14.162 "adrfam": "IPv4", 00:20:14.162 "traddr": "10.0.0.2", 00:20:14.162 "trsvcid": "4420" 00:20:14.162 }, 00:20:14.162 "peer_address": { 00:20:14.162 "trtype": "TCP", 00:20:14.162 "adrfam": "IPv4", 00:20:14.162 "traddr": "10.0.0.1", 00:20:14.162 "trsvcid": "48822" 00:20:14.162 }, 00:20:14.162 "auth": { 00:20:14.162 "state": "completed", 00:20:14.162 "digest": "sha256", 00:20:14.162 "dhgroup": "ffdhe6144" 00:20:14.162 } 00:20:14.162 } 00:20:14.162 ]' 00:20:14.162 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.421 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.421 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.421 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.421 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.421 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.421 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.421 02:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.678 02:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:20:15.610 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.610 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.610 02:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.610 02:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.610 02:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.610 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.610 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.610 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.868 02:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.462 00:20:16.462 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.462 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.462 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.724 { 00:20:16.724 "cntlid": 39, 00:20:16.724 "qid": 0, 00:20:16.724 "state": "enabled", 00:20:16.724 "thread": "nvmf_tgt_poll_group_000", 00:20:16.724 "listen_address": { 00:20:16.724 "trtype": "TCP", 00:20:16.724 "adrfam": "IPv4", 00:20:16.724 "traddr": "10.0.0.2", 00:20:16.724 "trsvcid": "4420" 00:20:16.724 }, 00:20:16.724 "peer_address": { 00:20:16.724 "trtype": "TCP", 00:20:16.724 "adrfam": "IPv4", 00:20:16.724 "traddr": "10.0.0.1", 00:20:16.724 "trsvcid": "48844" 00:20:16.724 }, 00:20:16.724 "auth": { 00:20:16.724 "state": "completed", 00:20:16.724 "digest": "sha256", 00:20:16.724 "dhgroup": "ffdhe6144" 00:20:16.724 } 00:20:16.724 } 00:20:16.724 ]' 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.724 02:07:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.724 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.983 02:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.359 02:07:23 
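The ffdhe8192 pass starting here follows the same outer structure visible in the loop headers at target/auth.sh@92-94: for each DH group, and within it for each key id, the host's allowed digests and DH groups are first narrowed with bdev_nvme_set_options and connect_authenticate is then run for that combination (the enclosing digest loop at auth.sh@91 moves on to sha384 later in this log). Condensed into a sketch, with the group and key lists limited to the values exercised in this stretch of the log:

for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3; do
        # Host: restrict negotiation to the digest/DH group under test.
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # Run the add_host/attach/verify/teardown sequence shown in the earlier sketches.
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done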
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.359 02:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.297 00:20:19.297 02:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.297 02:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.297 02:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.555 { 00:20:19.555 "cntlid": 41, 00:20:19.555 "qid": 0, 00:20:19.555 "state": "enabled", 00:20:19.555 "thread": "nvmf_tgt_poll_group_000", 00:20:19.555 "listen_address": { 00:20:19.555 "trtype": "TCP", 00:20:19.555 "adrfam": "IPv4", 00:20:19.555 "traddr": "10.0.0.2", 00:20:19.555 "trsvcid": "4420" 00:20:19.555 }, 00:20:19.555 "peer_address": { 00:20:19.555 "trtype": "TCP", 00:20:19.555 "adrfam": "IPv4", 00:20:19.555 "traddr": "10.0.0.1", 00:20:19.555 "trsvcid": "48882" 00:20:19.555 }, 00:20:19.555 "auth": { 00:20:19.555 "state": "completed", 00:20:19.555 "digest": "sha256", 00:20:19.555 "dhgroup": "ffdhe8192" 00:20:19.555 } 00:20:19.555 } 00:20:19.555 ]' 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.555 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.813 02:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:20:20.752 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.752 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.752 02:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.752 02:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.752 02:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.752 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.752 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.752 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.010 02:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.944 00:20:21.944 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.944 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.944 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.202 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.202 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.202 02:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.202 02:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.202 02:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.202 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.202 { 00:20:22.202 "cntlid": 43, 00:20:22.202 "qid": 0, 00:20:22.202 "state": "enabled", 00:20:22.202 "thread": "nvmf_tgt_poll_group_000", 00:20:22.202 "listen_address": { 00:20:22.202 "trtype": "TCP", 00:20:22.202 "adrfam": "IPv4", 00:20:22.202 "traddr": "10.0.0.2", 00:20:22.202 "trsvcid": "4420" 00:20:22.202 }, 00:20:22.202 "peer_address": { 00:20:22.202 "trtype": "TCP", 00:20:22.202 "adrfam": "IPv4", 00:20:22.202 "traddr": "10.0.0.1", 00:20:22.202 "trsvcid": "48892" 00:20:22.202 }, 00:20:22.202 "auth": { 00:20:22.202 "state": "completed", 00:20:22.202 "digest": "sha256", 00:20:22.202 "dhgroup": "ffdhe8192" 00:20:22.202 } 00:20:22.202 } 00:20:22.202 ]' 00:20:22.202 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.202 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.202 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.460 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.460 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.460 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.460 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.460 02:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.718 02:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:20:23.653 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.653 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.653 02:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.653 02:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.653 02:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.653 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:23.653 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.653 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.911 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.912 02:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.847 00:20:24.847 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.847 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.847 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.847 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.847 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.847 02:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.847 02:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.847 02:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.847 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.847 { 00:20:24.847 "cntlid": 45, 00:20:24.847 "qid": 0, 00:20:24.847 "state": "enabled", 00:20:24.847 "thread": "nvmf_tgt_poll_group_000", 00:20:24.847 "listen_address": { 00:20:24.847 "trtype": "TCP", 00:20:24.847 "adrfam": "IPv4", 00:20:24.847 "traddr": "10.0.0.2", 00:20:24.847 "trsvcid": "4420" 
00:20:24.847 }, 00:20:24.847 "peer_address": { 00:20:24.847 "trtype": "TCP", 00:20:24.847 "adrfam": "IPv4", 00:20:24.847 "traddr": "10.0.0.1", 00:20:24.847 "trsvcid": "42118" 00:20:24.847 }, 00:20:24.847 "auth": { 00:20:24.847 "state": "completed", 00:20:24.847 "digest": "sha256", 00:20:24.847 "dhgroup": "ffdhe8192" 00:20:24.847 } 00:20:24.847 } 00:20:24.847 ]' 00:20:24.848 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.105 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.105 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.105 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:25.105 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.105 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.105 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.105 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.363 02:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:20:26.300 02:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.300 02:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.300 02:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.300 02:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.300 02:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.300 02:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.300 02:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.300 02:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.557 02:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:26.557 02:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.558 02:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.558 02:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.558 02:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.558 02:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.558 02:07:32 
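One detail worth noting in the key3 iterations, including the one in progress here: no controller secret is configured for key3, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at target/auth.sh@37 produces nothing and both the add_host and attach_controller calls carry only --dhchap-key key3, i.e. without requesting bidirectional (controller) authentication. A small standalone illustration of that expansion idiom (hypothetical variable values, not a quote from auth.sh):

ckeys=( '<ctrl secret 0>' '<ctrl secret 1>' '<ctrl secret 2>' '' )   # placeholder secrets; index 3 is intentionally empty
keyid=3
# ${var:+word} expands to word only when var is set and non-empty,
# so for keyid=3 the ckey array stays empty and the option is simply omitted.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key "key$keyid" "${ckey[@]}"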
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:26.558 02:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.558 02:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.558 02:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.558 02:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.558 02:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.494 00:20:27.494 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.494 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.494 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.751 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.751 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.751 02:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.751 02:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.751 02:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.751 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.751 { 00:20:27.751 "cntlid": 47, 00:20:27.751 "qid": 0, 00:20:27.751 "state": "enabled", 00:20:27.751 "thread": "nvmf_tgt_poll_group_000", 00:20:27.751 "listen_address": { 00:20:27.751 "trtype": "TCP", 00:20:27.751 "adrfam": "IPv4", 00:20:27.751 "traddr": "10.0.0.2", 00:20:27.751 "trsvcid": "4420" 00:20:27.751 }, 00:20:27.751 "peer_address": { 00:20:27.751 "trtype": "TCP", 00:20:27.751 "adrfam": "IPv4", 00:20:27.751 "traddr": "10.0.0.1", 00:20:27.751 "trsvcid": "42152" 00:20:27.751 }, 00:20:27.751 "auth": { 00:20:27.751 "state": "completed", 00:20:27.751 "digest": "sha256", 00:20:27.751 "dhgroup": "ffdhe8192" 00:20:27.751 } 00:20:27.751 } 00:20:27.751 ]' 00:20:27.751 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.751 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.751 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.007 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.007 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.007 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.007 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.007 
02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.266 02:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:20:29.204 02:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.204 02:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.204 02:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.204 02:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.204 02:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.204 02:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:29.204 02:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.204 02:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.204 02:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.205 02:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.463 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.721 00:20:29.721 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.721 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.721 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.979 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.979 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.979 02:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.979 02:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.979 02:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.979 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.979 { 00:20:29.979 "cntlid": 49, 00:20:29.979 "qid": 0, 00:20:29.979 "state": "enabled", 00:20:29.979 "thread": "nvmf_tgt_poll_group_000", 00:20:29.979 "listen_address": { 00:20:29.979 "trtype": "TCP", 00:20:29.979 "adrfam": "IPv4", 00:20:29.979 "traddr": "10.0.0.2", 00:20:29.979 "trsvcid": "4420" 00:20:29.979 }, 00:20:29.979 "peer_address": { 00:20:29.979 "trtype": "TCP", 00:20:29.979 "adrfam": "IPv4", 00:20:29.979 "traddr": "10.0.0.1", 00:20:29.979 "trsvcid": "42170" 00:20:29.979 }, 00:20:29.979 "auth": { 00:20:29.979 "state": "completed", 00:20:29.979 "digest": "sha384", 00:20:29.979 "dhgroup": "null" 00:20:29.979 } 00:20:29.979 } 00:20:29.979 ]' 00:20:29.979 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.237 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.237 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.237 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:30.237 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.237 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.237 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.237 02:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.494 02:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:20:31.466 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.466 02:07:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.466 02:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.466 02:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.466 02:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.466 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.466 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.466 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.724 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.982 00:20:31.982 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.982 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.982 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.241 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.241 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.241 02:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.241 02:07:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:32.241 02:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.241 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.241 { 00:20:32.241 "cntlid": 51, 00:20:32.241 "qid": 0, 00:20:32.241 "state": "enabled", 00:20:32.241 "thread": "nvmf_tgt_poll_group_000", 00:20:32.241 "listen_address": { 00:20:32.241 "trtype": "TCP", 00:20:32.241 "adrfam": "IPv4", 00:20:32.241 "traddr": "10.0.0.2", 00:20:32.241 "trsvcid": "4420" 00:20:32.241 }, 00:20:32.241 "peer_address": { 00:20:32.241 "trtype": "TCP", 00:20:32.241 "adrfam": "IPv4", 00:20:32.241 "traddr": "10.0.0.1", 00:20:32.241 "trsvcid": "42188" 00:20:32.241 }, 00:20:32.241 "auth": { 00:20:32.241 "state": "completed", 00:20:32.241 "digest": "sha384", 00:20:32.241 "dhgroup": "null" 00:20:32.241 } 00:20:32.241 } 00:20:32.241 ]' 00:20:32.242 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.242 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.242 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.500 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:32.500 02:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.500 02:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.500 02:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.500 02:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.758 02:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:20:33.693 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.693 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.693 02:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.693 02:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.693 02:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.693 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.693 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:33.693 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:33.951 02:07:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.951 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.209 00:20:34.209 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.209 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.209 02:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.467 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.467 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.467 02:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.467 02:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.467 02:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.467 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.467 { 00:20:34.467 "cntlid": 53, 00:20:34.467 "qid": 0, 00:20:34.467 "state": "enabled", 00:20:34.467 "thread": "nvmf_tgt_poll_group_000", 00:20:34.467 "listen_address": { 00:20:34.467 "trtype": "TCP", 00:20:34.467 "adrfam": "IPv4", 00:20:34.467 "traddr": "10.0.0.2", 00:20:34.467 "trsvcid": "4420" 00:20:34.467 }, 00:20:34.467 "peer_address": { 00:20:34.467 "trtype": "TCP", 00:20:34.467 "adrfam": "IPv4", 00:20:34.467 "traddr": "10.0.0.1", 00:20:34.467 "trsvcid": "55356" 00:20:34.467 }, 00:20:34.467 "auth": { 00:20:34.467 "state": "completed", 00:20:34.467 "digest": "sha384", 00:20:34.467 "dhgroup": "null" 00:20:34.467 } 00:20:34.467 } 00:20:34.467 ]' 00:20:34.467 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.725 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:34.725 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.725 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:34.725 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.725 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.725 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.725 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.983 02:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:20:35.919 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.919 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.919 02:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.919 02:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.919 02:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.919 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.919 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:35.919 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.176 02:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.435 00:20:36.435 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.435 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.435 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.693 { 00:20:36.693 "cntlid": 55, 00:20:36.693 "qid": 0, 00:20:36.693 "state": "enabled", 00:20:36.693 "thread": "nvmf_tgt_poll_group_000", 00:20:36.693 "listen_address": { 00:20:36.693 "trtype": "TCP", 00:20:36.693 "adrfam": "IPv4", 00:20:36.693 "traddr": "10.0.0.2", 00:20:36.693 "trsvcid": "4420" 00:20:36.693 }, 00:20:36.693 "peer_address": { 00:20:36.693 "trtype": "TCP", 00:20:36.693 "adrfam": "IPv4", 00:20:36.693 "traddr": "10.0.0.1", 00:20:36.693 "trsvcid": "55378" 00:20:36.693 }, 00:20:36.693 "auth": { 00:20:36.693 "state": "completed", 00:20:36.693 "digest": "sha384", 00:20:36.693 "dhgroup": "null" 00:20:36.693 } 00:20:36.693 } 00:20:36.693 ]' 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:36.693 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.951 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.951 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.951 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.209 02:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:20:38.145 02:07:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.145 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.145 02:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.145 02:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.145 02:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.145 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.145 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.145 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.145 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.404 02:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.662 00:20:38.662 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.662 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.662 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.920 02:07:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.920 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.920 02:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.920 02:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.920 02:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.920 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.920 { 00:20:38.920 "cntlid": 57, 00:20:38.920 "qid": 0, 00:20:38.920 "state": "enabled", 00:20:38.920 "thread": "nvmf_tgt_poll_group_000", 00:20:38.920 "listen_address": { 00:20:38.920 "trtype": "TCP", 00:20:38.920 "adrfam": "IPv4", 00:20:38.920 "traddr": "10.0.0.2", 00:20:38.920 "trsvcid": "4420" 00:20:38.920 }, 00:20:38.920 "peer_address": { 00:20:38.920 "trtype": "TCP", 00:20:38.920 "adrfam": "IPv4", 00:20:38.920 "traddr": "10.0.0.1", 00:20:38.920 "trsvcid": "55410" 00:20:38.920 }, 00:20:38.920 "auth": { 00:20:38.920 "state": "completed", 00:20:38.920 "digest": "sha384", 00:20:38.920 "dhgroup": "ffdhe2048" 00:20:38.920 } 00:20:38.920 } 00:20:38.920 ]' 00:20:38.920 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.179 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.179 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.179 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.179 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.179 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.179 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.179 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.437 02:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:20:40.371 02:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.371 02:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.371 02:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.371 02:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.371 02:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.371 02:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.371 02:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.371 02:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.629 02:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.630 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.630 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.887 00:20:40.887 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.887 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.887 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.146 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.146 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.146 02:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.146 02:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.403 { 00:20:41.403 "cntlid": 59, 00:20:41.403 "qid": 0, 00:20:41.403 "state": "enabled", 00:20:41.403 "thread": "nvmf_tgt_poll_group_000", 00:20:41.403 "listen_address": { 00:20:41.403 "trtype": "TCP", 00:20:41.403 "adrfam": "IPv4", 00:20:41.403 "traddr": "10.0.0.2", 00:20:41.403 "trsvcid": "4420" 00:20:41.403 }, 00:20:41.403 "peer_address": { 00:20:41.403 "trtype": "TCP", 00:20:41.403 "adrfam": "IPv4", 00:20:41.403 
"traddr": "10.0.0.1", 00:20:41.403 "trsvcid": "55448" 00:20:41.403 }, 00:20:41.403 "auth": { 00:20:41.403 "state": "completed", 00:20:41.403 "digest": "sha384", 00:20:41.403 "dhgroup": "ffdhe2048" 00:20:41.403 } 00:20:41.403 } 00:20:41.403 ]' 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.403 02:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.661 02:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:20:42.600 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.600 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.600 02:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.600 02:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.600 02:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.600 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.600 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.600 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.858 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.859 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.117 00:20:43.377 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.377 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.377 02:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.636 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.636 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.636 02:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.636 02:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.636 02:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.636 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.636 { 00:20:43.636 "cntlid": 61, 00:20:43.636 "qid": 0, 00:20:43.636 "state": "enabled", 00:20:43.636 "thread": "nvmf_tgt_poll_group_000", 00:20:43.636 "listen_address": { 00:20:43.636 "trtype": "TCP", 00:20:43.636 "adrfam": "IPv4", 00:20:43.636 "traddr": "10.0.0.2", 00:20:43.636 "trsvcid": "4420" 00:20:43.636 }, 00:20:43.636 "peer_address": { 00:20:43.636 "trtype": "TCP", 00:20:43.636 "adrfam": "IPv4", 00:20:43.637 "traddr": "10.0.0.1", 00:20:43.637 "trsvcid": "50336" 00:20:43.637 }, 00:20:43.637 "auth": { 00:20:43.637 "state": "completed", 00:20:43.637 "digest": "sha384", 00:20:43.637 "dhgroup": "ffdhe2048" 00:20:43.637 } 00:20:43.637 } 00:20:43.637 ]' 00:20:43.637 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.637 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.637 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.637 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.637 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.637 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.637 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.637 02:07:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.895 02:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:20:44.859 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.859 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.859 02:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.859 02:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.859 02:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.859 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.859 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.859 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:45.117 02:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:45.693 00:20:45.693 02:07:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.693 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.693 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.951 { 00:20:45.951 "cntlid": 63, 00:20:45.951 "qid": 0, 00:20:45.951 "state": "enabled", 00:20:45.951 "thread": "nvmf_tgt_poll_group_000", 00:20:45.951 "listen_address": { 00:20:45.951 "trtype": "TCP", 00:20:45.951 "adrfam": "IPv4", 00:20:45.951 "traddr": "10.0.0.2", 00:20:45.951 "trsvcid": "4420" 00:20:45.951 }, 00:20:45.951 "peer_address": { 00:20:45.951 "trtype": "TCP", 00:20:45.951 "adrfam": "IPv4", 00:20:45.951 "traddr": "10.0.0.1", 00:20:45.951 "trsvcid": "50380" 00:20:45.951 }, 00:20:45.951 "auth": { 00:20:45.951 "state": "completed", 00:20:45.951 "digest": "sha384", 00:20:45.951 "dhgroup": "ffdhe2048" 00:20:45.951 } 00:20:45.951 } 00:20:45.951 ]' 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.951 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.208 02:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:20:47.142 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.142 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.142 02:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.142 02:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:47.142 02:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.142 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.142 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.142 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:47.142 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:47.400 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:47.400 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.400 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.400 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:47.400 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:47.400 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.400 02:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.400 02:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.400 02:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.401 02:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.401 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.401 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.969 00:20:47.969 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.969 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.969 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.969 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.969 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.969 02:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.969 02:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.227 { 
00:20:48.227 "cntlid": 65, 00:20:48.227 "qid": 0, 00:20:48.227 "state": "enabled", 00:20:48.227 "thread": "nvmf_tgt_poll_group_000", 00:20:48.227 "listen_address": { 00:20:48.227 "trtype": "TCP", 00:20:48.227 "adrfam": "IPv4", 00:20:48.227 "traddr": "10.0.0.2", 00:20:48.227 "trsvcid": "4420" 00:20:48.227 }, 00:20:48.227 "peer_address": { 00:20:48.227 "trtype": "TCP", 00:20:48.227 "adrfam": "IPv4", 00:20:48.227 "traddr": "10.0.0.1", 00:20:48.227 "trsvcid": "50404" 00:20:48.227 }, 00:20:48.227 "auth": { 00:20:48.227 "state": "completed", 00:20:48.227 "digest": "sha384", 00:20:48.227 "dhgroup": "ffdhe3072" 00:20:48.227 } 00:20:48.227 } 00:20:48.227 ]' 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.227 02:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.485 02:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:20:49.421 02:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.421 02:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.421 02:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.421 02:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.421 02:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.421 02:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.421 02:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:49.421 02:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.679 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.936 00:20:49.936 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.936 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.936 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.193 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.193 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.193 02:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.193 02:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.193 02:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.193 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.193 { 00:20:50.193 "cntlid": 67, 00:20:50.193 "qid": 0, 00:20:50.193 "state": "enabled", 00:20:50.193 "thread": "nvmf_tgt_poll_group_000", 00:20:50.193 "listen_address": { 00:20:50.193 "trtype": "TCP", 00:20:50.193 "adrfam": "IPv4", 00:20:50.193 "traddr": "10.0.0.2", 00:20:50.193 "trsvcid": "4420" 00:20:50.193 }, 00:20:50.193 "peer_address": { 00:20:50.193 "trtype": "TCP", 00:20:50.193 "adrfam": "IPv4", 00:20:50.193 "traddr": "10.0.0.1", 00:20:50.193 "trsvcid": "50426" 00:20:50.193 }, 00:20:50.193 "auth": { 00:20:50.193 "state": "completed", 00:20:50.193 "digest": "sha384", 00:20:50.193 "dhgroup": "ffdhe3072" 00:20:50.193 } 00:20:50.193 } 00:20:50.193 ]' 00:20:50.193 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.451 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.451 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.451 02:07:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:50.451 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.451 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.451 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.451 02:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.709 02:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:20:51.648 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.648 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.648 02:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.648 02:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.648 02:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.648 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.648 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.648 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.907 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.165 00:20:52.165 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.165 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.165 02:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.423 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.423 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.423 02:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.423 02:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.423 02:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.423 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.423 { 00:20:52.423 "cntlid": 69, 00:20:52.423 "qid": 0, 00:20:52.423 "state": "enabled", 00:20:52.423 "thread": "nvmf_tgt_poll_group_000", 00:20:52.423 "listen_address": { 00:20:52.423 "trtype": "TCP", 00:20:52.423 "adrfam": "IPv4", 00:20:52.423 "traddr": "10.0.0.2", 00:20:52.423 "trsvcid": "4420" 00:20:52.423 }, 00:20:52.423 "peer_address": { 00:20:52.423 "trtype": "TCP", 00:20:52.423 "adrfam": "IPv4", 00:20:52.423 "traddr": "10.0.0.1", 00:20:52.423 "trsvcid": "49224" 00:20:52.423 }, 00:20:52.423 "auth": { 00:20:52.423 "state": "completed", 00:20:52.423 "digest": "sha384", 00:20:52.423 "dhgroup": "ffdhe3072" 00:20:52.423 } 00:20:52.423 } 00:20:52.423 ]' 00:20:52.423 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.423 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.423 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.682 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.682 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.682 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.682 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.682 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.941 02:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret 
DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:20:53.877 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.877 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.877 02:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.877 02:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.877 02:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.877 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.877 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.877 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.135 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:54.135 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.136 02:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.395 00:20:54.653 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.653 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.653 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.912 { 00:20:54.912 "cntlid": 71, 00:20:54.912 "qid": 0, 00:20:54.912 "state": "enabled", 00:20:54.912 "thread": "nvmf_tgt_poll_group_000", 00:20:54.912 "listen_address": { 00:20:54.912 "trtype": "TCP", 00:20:54.912 "adrfam": "IPv4", 00:20:54.912 "traddr": "10.0.0.2", 00:20:54.912 "trsvcid": "4420" 00:20:54.912 }, 00:20:54.912 "peer_address": { 00:20:54.912 "trtype": "TCP", 00:20:54.912 "adrfam": "IPv4", 00:20:54.912 "traddr": "10.0.0.1", 00:20:54.912 "trsvcid": "49246" 00:20:54.912 }, 00:20:54.912 "auth": { 00:20:54.912 "state": "completed", 00:20:54.912 "digest": "sha384", 00:20:54.912 "dhgroup": "ffdhe3072" 00:20:54.912 } 00:20:54.912 } 00:20:54.912 ]' 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.912 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.171 02:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:20:56.107 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.107 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.107 02:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.107 02:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.107 02:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.107 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.107 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.107 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.107 02:08:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.365 02:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.930 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.930 { 00:20:56.930 "cntlid": 73, 00:20:56.930 "qid": 0, 00:20:56.930 "state": "enabled", 00:20:56.930 "thread": "nvmf_tgt_poll_group_000", 00:20:56.930 "listen_address": { 00:20:56.930 "trtype": "TCP", 00:20:56.930 "adrfam": "IPv4", 00:20:56.930 "traddr": "10.0.0.2", 00:20:56.930 "trsvcid": "4420" 00:20:56.930 }, 00:20:56.930 "peer_address": { 00:20:56.930 "trtype": "TCP", 00:20:56.930 "adrfam": "IPv4", 00:20:56.930 "traddr": "10.0.0.1", 00:20:56.930 "trsvcid": "49280" 00:20:56.930 }, 00:20:56.930 "auth": { 00:20:56.930 
"state": "completed", 00:20:56.930 "digest": "sha384", 00:20:56.930 "dhgroup": "ffdhe4096" 00:20:56.930 } 00:20:56.930 } 00:20:56.930 ]' 00:20:56.930 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.187 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.187 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.187 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.187 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.187 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.187 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.187 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.445 02:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:20:58.432 02:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.432 02:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.432 02:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.432 02:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.432 02:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.432 02:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.432 02:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.432 02:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.714 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.972 00:20:58.972 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.972 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.972 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.230 { 00:20:59.230 "cntlid": 75, 00:20:59.230 "qid": 0, 00:20:59.230 "state": "enabled", 00:20:59.230 "thread": "nvmf_tgt_poll_group_000", 00:20:59.230 "listen_address": { 00:20:59.230 "trtype": "TCP", 00:20:59.230 "adrfam": "IPv4", 00:20:59.230 "traddr": "10.0.0.2", 00:20:59.230 "trsvcid": "4420" 00:20:59.230 }, 00:20:59.230 "peer_address": { 00:20:59.230 "trtype": "TCP", 00:20:59.230 "adrfam": "IPv4", 00:20:59.230 "traddr": "10.0.0.1", 00:20:59.230 "trsvcid": "49302" 00:20:59.230 }, 00:20:59.230 "auth": { 00:20:59.230 "state": "completed", 00:20:59.230 "digest": "sha384", 00:20:59.230 "dhgroup": "ffdhe4096" 00:20:59.230 } 00:20:59.230 } 00:20:59.230 ]' 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.230 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.487 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.487 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.488 02:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.745 02:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:21:00.681 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.681 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.681 02:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.681 02:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.681 02:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.681 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.681 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.681 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.938 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:01.197 00:21:01.455 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.455 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.455 02:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.455 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.455 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.455 02:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.455 02:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.455 02:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.455 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.455 { 00:21:01.455 "cntlid": 77, 00:21:01.455 "qid": 0, 00:21:01.455 "state": "enabled", 00:21:01.455 "thread": "nvmf_tgt_poll_group_000", 00:21:01.455 "listen_address": { 00:21:01.455 "trtype": "TCP", 00:21:01.455 "adrfam": "IPv4", 00:21:01.455 "traddr": "10.0.0.2", 00:21:01.455 "trsvcid": "4420" 00:21:01.455 }, 00:21:01.455 "peer_address": { 00:21:01.455 "trtype": "TCP", 00:21:01.455 "adrfam": "IPv4", 00:21:01.455 "traddr": "10.0.0.1", 00:21:01.455 "trsvcid": "49338" 00:21:01.455 }, 00:21:01.455 "auth": { 00:21:01.455 "state": "completed", 00:21:01.455 "digest": "sha384", 00:21:01.455 "dhgroup": "ffdhe4096" 00:21:01.455 } 00:21:01.455 } 00:21:01.455 ]' 00:21:01.712 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.712 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.712 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.712 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.712 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.712 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.712 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.712 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.970 02:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:21:02.909 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.909 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.909 02:08:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.909 02:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.909 02:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.909 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.909 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.909 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.168 02:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.733 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.733 { 00:21:03.733 "cntlid": 79, 00:21:03.733 "qid": 
0, 00:21:03.733 "state": "enabled", 00:21:03.733 "thread": "nvmf_tgt_poll_group_000", 00:21:03.733 "listen_address": { 00:21:03.733 "trtype": "TCP", 00:21:03.733 "adrfam": "IPv4", 00:21:03.733 "traddr": "10.0.0.2", 00:21:03.733 "trsvcid": "4420" 00:21:03.733 }, 00:21:03.733 "peer_address": { 00:21:03.733 "trtype": "TCP", 00:21:03.733 "adrfam": "IPv4", 00:21:03.733 "traddr": "10.0.0.1", 00:21:03.733 "trsvcid": "59264" 00:21:03.733 }, 00:21:03.733 "auth": { 00:21:03.733 "state": "completed", 00:21:03.733 "digest": "sha384", 00:21:03.733 "dhgroup": "ffdhe4096" 00:21:03.733 } 00:21:03.733 } 00:21:03.733 ]' 00:21:03.733 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.991 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.991 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.991 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.991 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.991 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.991 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.991 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.249 02:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:21:05.188 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.188 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.188 02:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.188 02:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.188 02:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.188 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.188 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.188 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.188 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.447 02:08:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.447 02:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.014 00:21:06.014 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.014 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.014 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.272 { 00:21:06.272 "cntlid": 81, 00:21:06.272 "qid": 0, 00:21:06.272 "state": "enabled", 00:21:06.272 "thread": "nvmf_tgt_poll_group_000", 00:21:06.272 "listen_address": { 00:21:06.272 "trtype": "TCP", 00:21:06.272 "adrfam": "IPv4", 00:21:06.272 "traddr": "10.0.0.2", 00:21:06.272 "trsvcid": "4420" 00:21:06.272 }, 00:21:06.272 "peer_address": { 00:21:06.272 "trtype": "TCP", 00:21:06.272 "adrfam": "IPv4", 00:21:06.272 "traddr": "10.0.0.1", 00:21:06.272 "trsvcid": "59278" 00:21:06.272 }, 00:21:06.272 "auth": { 00:21:06.272 "state": "completed", 00:21:06.272 "digest": "sha384", 00:21:06.272 "dhgroup": "ffdhe6144" 00:21:06.272 } 00:21:06.272 } 00:21:06.272 ]' 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.272 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.531 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.531 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.531 02:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.789 02:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:21:07.720 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.720 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.720 02:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.720 02:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.720 02:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.720 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.720 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.720 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.977 02:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.542 00:21:08.542 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.542 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.543 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.800 { 00:21:08.800 "cntlid": 83, 00:21:08.800 "qid": 0, 00:21:08.800 "state": "enabled", 00:21:08.800 "thread": "nvmf_tgt_poll_group_000", 00:21:08.800 "listen_address": { 00:21:08.800 "trtype": "TCP", 00:21:08.800 "adrfam": "IPv4", 00:21:08.800 "traddr": "10.0.0.2", 00:21:08.800 "trsvcid": "4420" 00:21:08.800 }, 00:21:08.800 "peer_address": { 00:21:08.800 "trtype": "TCP", 00:21:08.800 "adrfam": "IPv4", 00:21:08.800 "traddr": "10.0.0.1", 00:21:08.800 "trsvcid": "59310" 00:21:08.800 }, 00:21:08.800 "auth": { 00:21:08.800 "state": "completed", 00:21:08.800 "digest": "sha384", 00:21:08.800 "dhgroup": "ffdhe6144" 00:21:08.800 } 00:21:08.800 } 00:21:08.800 ]' 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.800 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.059 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.059 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.059 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.059 02:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret 
DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:21:10.435 02:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.435 02:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.435 02:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.435 02:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.435 02:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.435 02:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.435 02:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.435 02:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.435 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.000 00:21:11.000 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.000 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.000 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.258 02:08:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.258 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.258 02:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.258 02:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.258 02:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.258 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.258 { 00:21:11.258 "cntlid": 85, 00:21:11.258 "qid": 0, 00:21:11.258 "state": "enabled", 00:21:11.258 "thread": "nvmf_tgt_poll_group_000", 00:21:11.258 "listen_address": { 00:21:11.258 "trtype": "TCP", 00:21:11.258 "adrfam": "IPv4", 00:21:11.258 "traddr": "10.0.0.2", 00:21:11.258 "trsvcid": "4420" 00:21:11.258 }, 00:21:11.258 "peer_address": { 00:21:11.258 "trtype": "TCP", 00:21:11.258 "adrfam": "IPv4", 00:21:11.258 "traddr": "10.0.0.1", 00:21:11.258 "trsvcid": "59342" 00:21:11.258 }, 00:21:11.258 "auth": { 00:21:11.258 "state": "completed", 00:21:11.258 "digest": "sha384", 00:21:11.258 "dhgroup": "ffdhe6144" 00:21:11.258 } 00:21:11.258 } 00:21:11.258 ]' 00:21:11.258 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.258 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.258 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.516 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.516 02:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.516 02:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.516 02:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.516 02:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.773 02:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:21:12.739 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.739 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.739 02:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.739 02:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.739 02:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.739 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.739 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
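(The same per-key sequence repeats for every digest/DH-group combination in this sweep — sha384 here, across ffdhe3072 through ffdhe8192. A condensed sketch of one iteration follows; it is only an illustration of the RPC/nvme-cli flow the log is exercising, not the test script itself. The rpc.py path, socket, NQNs and host ID mirror the log; the assumption that the target is already listening on 10.0.0.2:4420 with keys key3/ckey3 loaded in the keyring is mine, and the DHHC-1 secret is a placeholder, not a real value.)

# Minimal sketch of one connect_authenticate iteration (assumptions: SPDK target
# already serving nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420, host RPC socket at
# /var/tmp/host.sock, key3 already registered; DHHC-1 secret below is a placeholder).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Pin the host-side initiator to a single digest/DH-group combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Register the host on the subsystem with its DH-CHAP key (target-side RPC socket).
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key3

# Attach a controller from the host side, authenticating with the same key.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key3

# Confirm the qpair completed authentication with the expected parameters.
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'
#   -> "state": "completed", "digest": "sha384", "dhgroup": "ffdhe6144"

# Tear down, then repeat the handshake with the kernel initiator and the raw secret.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:03:<placeholder>:'
nvme disconnect -n $subnqn
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn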
00:21:12.739 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.998 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:12.998 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.998 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.998 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:12.998 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:12.998 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.999 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:12.999 02:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.999 02:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.999 02:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.999 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.999 02:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.567 00:21:13.567 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.567 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.567 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.825 { 00:21:13.825 "cntlid": 87, 00:21:13.825 "qid": 0, 00:21:13.825 "state": "enabled", 00:21:13.825 "thread": "nvmf_tgt_poll_group_000", 00:21:13.825 "listen_address": { 00:21:13.825 "trtype": "TCP", 00:21:13.825 "adrfam": "IPv4", 00:21:13.825 "traddr": "10.0.0.2", 00:21:13.825 "trsvcid": "4420" 00:21:13.825 }, 00:21:13.825 "peer_address": { 00:21:13.825 "trtype": "TCP", 00:21:13.825 "adrfam": "IPv4", 00:21:13.825 "traddr": "10.0.0.1", 00:21:13.825 "trsvcid": "58204" 00:21:13.825 }, 00:21:13.825 "auth": { 00:21:13.825 "state": "completed", 
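Once the controller is attached, the script checks that the host actually sees it and that the target negotiated the expected authentication parameters before tearing it down again. Roughly, under the same assumptions as the sketch above:

  # the host must report exactly the controller we just created
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # the target reports the negotiated auth parameters for each qpair
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # detach before the next combination is tried
  $rpc bdev_nvme_detach_controller nvme0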
00:21:13.825 "digest": "sha384", 00:21:13.825 "dhgroup": "ffdhe6144" 00:21:13.825 } 00:21:13.825 } 00:21:13.825 ]' 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.825 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.083 02:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:21:15.018 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.018 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.018 02:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.018 02:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.018 02:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.018 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.018 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.018 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.018 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.277 02:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.216 00:21:16.216 02:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.216 02:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.216 02:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.474 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.474 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.474 02:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.474 02:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.474 02:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.474 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.474 { 00:21:16.474 "cntlid": 89, 00:21:16.474 "qid": 0, 00:21:16.474 "state": "enabled", 00:21:16.474 "thread": "nvmf_tgt_poll_group_000", 00:21:16.474 "listen_address": { 00:21:16.474 "trtype": "TCP", 00:21:16.474 "adrfam": "IPv4", 00:21:16.474 "traddr": "10.0.0.2", 00:21:16.474 "trsvcid": "4420" 00:21:16.474 }, 00:21:16.474 "peer_address": { 00:21:16.474 "trtype": "TCP", 00:21:16.474 "adrfam": "IPv4", 00:21:16.474 "traddr": "10.0.0.1", 00:21:16.474 "trsvcid": "58228" 00:21:16.474 }, 00:21:16.474 "auth": { 00:21:16.474 "state": "completed", 00:21:16.474 "digest": "sha384", 00:21:16.474 "dhgroup": "ffdhe8192" 00:21:16.474 } 00:21:16.474 } 00:21:16.474 ]' 00:21:16.474 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.475 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.475 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.475 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.475 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.732 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.732 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.732 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.989 02:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:21:17.922 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.922 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.922 02:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.922 02:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.922 02:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.922 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.922 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.922 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.189 02:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
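The same key material is also exercised from the Linux initiator: after the target-side checks, auth.sh connects with nvme-cli, passing the host and controller secrets in DHHC-1 format on the command line, then disconnects and removes the host from the subsystem before the next key is tried. A condensed sketch; the <host secret> and <ctrl secret> placeholders stand for the DHHC-1 strings visible in the trace:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:00:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<ctrl secret>'

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55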
00:21:19.123 00:21:19.123 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.123 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.123 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.380 { 00:21:19.380 "cntlid": 91, 00:21:19.380 "qid": 0, 00:21:19.380 "state": "enabled", 00:21:19.380 "thread": "nvmf_tgt_poll_group_000", 00:21:19.380 "listen_address": { 00:21:19.380 "trtype": "TCP", 00:21:19.380 "adrfam": "IPv4", 00:21:19.380 "traddr": "10.0.0.2", 00:21:19.380 "trsvcid": "4420" 00:21:19.380 }, 00:21:19.380 "peer_address": { 00:21:19.380 "trtype": "TCP", 00:21:19.380 "adrfam": "IPv4", 00:21:19.380 "traddr": "10.0.0.1", 00:21:19.380 "trsvcid": "58262" 00:21:19.380 }, 00:21:19.380 "auth": { 00:21:19.380 "state": "completed", 00:21:19.380 "digest": "sha384", 00:21:19.380 "dhgroup": "ffdhe8192" 00:21:19.380 } 00:21:19.380 } 00:21:19.380 ]' 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.380 02:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.380 02:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.380 02:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.380 02:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.637 02:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:21:20.571 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.571 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.571 02:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:20.571 02:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.571 02:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.571 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.571 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.571 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.829 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:20.829 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.829 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.829 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:20.830 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:20.830 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.830 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.830 02:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.830 02:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.830 02:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.830 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.830 02:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.763 00:21:21.763 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.763 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.763 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.021 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.021 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.021 02:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.021 02:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.021 02:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.021 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.021 { 
00:21:22.021 "cntlid": 93, 00:21:22.021 "qid": 0, 00:21:22.021 "state": "enabled", 00:21:22.021 "thread": "nvmf_tgt_poll_group_000", 00:21:22.021 "listen_address": { 00:21:22.021 "trtype": "TCP", 00:21:22.021 "adrfam": "IPv4", 00:21:22.021 "traddr": "10.0.0.2", 00:21:22.021 "trsvcid": "4420" 00:21:22.021 }, 00:21:22.021 "peer_address": { 00:21:22.021 "trtype": "TCP", 00:21:22.021 "adrfam": "IPv4", 00:21:22.021 "traddr": "10.0.0.1", 00:21:22.021 "trsvcid": "58284" 00:21:22.021 }, 00:21:22.021 "auth": { 00:21:22.021 "state": "completed", 00:21:22.021 "digest": "sha384", 00:21:22.021 "dhgroup": "ffdhe8192" 00:21:22.021 } 00:21:22.021 } 00:21:22.021 ]' 00:21:22.021 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.021 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.021 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.279 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.279 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.279 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.279 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.279 02:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.537 02:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:21:23.471 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.471 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.471 02:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.471 02:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.471 02:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.471 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.471 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.471 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:23.729 02:08:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.729 02:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.666 00:21:24.666 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.666 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.666 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.924 { 00:21:24.924 "cntlid": 95, 00:21:24.924 "qid": 0, 00:21:24.924 "state": "enabled", 00:21:24.924 "thread": "nvmf_tgt_poll_group_000", 00:21:24.924 "listen_address": { 00:21:24.924 "trtype": "TCP", 00:21:24.924 "adrfam": "IPv4", 00:21:24.924 "traddr": "10.0.0.2", 00:21:24.924 "trsvcid": "4420" 00:21:24.924 }, 00:21:24.924 "peer_address": { 00:21:24.924 "trtype": "TCP", 00:21:24.924 "adrfam": "IPv4", 00:21:24.924 "traddr": "10.0.0.1", 00:21:24.924 "trsvcid": "36364" 00:21:24.924 }, 00:21:24.924 "auth": { 00:21:24.924 "state": "completed", 00:21:24.924 "digest": "sha384", 00:21:24.924 "dhgroup": "ffdhe8192" 00:21:24.924 } 00:21:24.924 } 00:21:24.924 ]' 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.924 02:08:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.924 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.182 02:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.117 02:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.375 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.695 00:21:26.695 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.695 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.695 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.982 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.982 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.982 02:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.982 02:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.982 02:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.982 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.982 { 00:21:26.982 "cntlid": 97, 00:21:26.982 "qid": 0, 00:21:26.982 "state": "enabled", 00:21:26.982 "thread": "nvmf_tgt_poll_group_000", 00:21:26.982 "listen_address": { 00:21:26.982 "trtype": "TCP", 00:21:26.982 "adrfam": "IPv4", 00:21:26.982 "traddr": "10.0.0.2", 00:21:26.982 "trsvcid": "4420" 00:21:26.982 }, 00:21:26.982 "peer_address": { 00:21:26.982 "trtype": "TCP", 00:21:26.982 "adrfam": "IPv4", 00:21:26.982 "traddr": "10.0.0.1", 00:21:26.982 "trsvcid": "36386" 00:21:26.982 }, 00:21:26.982 "auth": { 00:21:26.982 "state": "completed", 00:21:26.982 "digest": "sha512", 00:21:26.982 "dhgroup": "null" 00:21:26.982 } 00:21:26.982 } 00:21:26.982 ]' 00:21:26.982 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.982 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.982 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.240 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:27.240 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.240 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.240 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.240 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.499 02:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret 
DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:21:28.430 02:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.430 02:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.430 02:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.430 02:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.430 02:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.430 02:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.430 02:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.430 02:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.688 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.969 00:21:28.969 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.969 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.969 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.227 { 00:21:29.227 "cntlid": 99, 00:21:29.227 "qid": 0, 00:21:29.227 "state": "enabled", 00:21:29.227 "thread": "nvmf_tgt_poll_group_000", 00:21:29.227 "listen_address": { 00:21:29.227 "trtype": "TCP", 00:21:29.227 "adrfam": "IPv4", 00:21:29.227 "traddr": "10.0.0.2", 00:21:29.227 "trsvcid": "4420" 00:21:29.227 }, 00:21:29.227 "peer_address": { 00:21:29.227 "trtype": "TCP", 00:21:29.227 "adrfam": "IPv4", 00:21:29.227 "traddr": "10.0.0.1", 00:21:29.227 "trsvcid": "36428" 00:21:29.227 }, 00:21:29.227 "auth": { 00:21:29.227 "state": "completed", 00:21:29.227 "digest": "sha512", 00:21:29.227 "dhgroup": "null" 00:21:29.227 } 00:21:29.227 } 00:21:29.227 ]' 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.227 02:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.486 02:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:21:30.421 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.680 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.680 02:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.680 02:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.680 02:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.680 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.680 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.680 02:08:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.939 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.197 00:21:31.197 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.197 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.197 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.454 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.454 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.454 02:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.454 02:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.454 02:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.454 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.454 { 00:21:31.454 "cntlid": 101, 00:21:31.454 "qid": 0, 00:21:31.454 "state": "enabled", 00:21:31.454 "thread": "nvmf_tgt_poll_group_000", 00:21:31.454 "listen_address": { 00:21:31.454 "trtype": "TCP", 00:21:31.454 "adrfam": "IPv4", 00:21:31.454 "traddr": "10.0.0.2", 00:21:31.454 "trsvcid": "4420" 00:21:31.454 }, 00:21:31.454 "peer_address": { 00:21:31.454 "trtype": "TCP", 00:21:31.454 "adrfam": "IPv4", 00:21:31.454 "traddr": "10.0.0.1", 00:21:31.454 "trsvcid": "36458" 00:21:31.454 }, 00:21:31.454 "auth": 
{ 00:21:31.454 "state": "completed", 00:21:31.454 "digest": "sha512", 00:21:31.454 "dhgroup": "null" 00:21:31.454 } 00:21:31.454 } 00:21:31.454 ]' 00:21:31.454 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.454 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.454 02:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.454 02:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:31.454 02:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.454 02:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.454 02:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.454 02:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.712 02:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:21:32.647 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.647 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.647 02:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.647 02:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.647 02:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.647 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.647 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.647 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:32.905 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.163 00:21:33.163 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.163 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.163 02:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.420 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.420 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.420 02:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.420 02:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.420 02:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.420 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.420 { 00:21:33.420 "cntlid": 103, 00:21:33.420 "qid": 0, 00:21:33.420 "state": "enabled", 00:21:33.420 "thread": "nvmf_tgt_poll_group_000", 00:21:33.420 "listen_address": { 00:21:33.420 "trtype": "TCP", 00:21:33.420 "adrfam": "IPv4", 00:21:33.420 "traddr": "10.0.0.2", 00:21:33.420 "trsvcid": "4420" 00:21:33.420 }, 00:21:33.420 "peer_address": { 00:21:33.420 "trtype": "TCP", 00:21:33.420 "adrfam": "IPv4", 00:21:33.420 "traddr": "10.0.0.1", 00:21:33.420 "trsvcid": "49340" 00:21:33.420 }, 00:21:33.420 "auth": { 00:21:33.420 "state": "completed", 00:21:33.420 "digest": "sha512", 00:21:33.420 "dhgroup": "null" 00:21:33.420 } 00:21:33.420 } 00:21:33.420 ]' 00:21:33.420 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.678 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.678 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.678 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:33.678 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.678 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.678 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.678 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.935 02:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:21:34.871 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.871 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.871 02:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.871 02:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.871 02:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.871 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.871 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.871 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.871 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.130 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.388 00:21:35.388 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.388 02:08:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.388 02:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.645 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.645 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.645 02:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.645 02:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.645 02:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.645 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.645 { 00:21:35.645 "cntlid": 105, 00:21:35.645 "qid": 0, 00:21:35.645 "state": "enabled", 00:21:35.645 "thread": "nvmf_tgt_poll_group_000", 00:21:35.645 "listen_address": { 00:21:35.645 "trtype": "TCP", 00:21:35.645 "adrfam": "IPv4", 00:21:35.645 "traddr": "10.0.0.2", 00:21:35.645 "trsvcid": "4420" 00:21:35.645 }, 00:21:35.645 "peer_address": { 00:21:35.645 "trtype": "TCP", 00:21:35.645 "adrfam": "IPv4", 00:21:35.645 "traddr": "10.0.0.1", 00:21:35.645 "trsvcid": "49364" 00:21:35.645 }, 00:21:35.645 "auth": { 00:21:35.645 "state": "completed", 00:21:35.645 "digest": "sha512", 00:21:35.645 "dhgroup": "ffdhe2048" 00:21:35.645 } 00:21:35.645 } 00:21:35.645 ]' 00:21:35.645 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.645 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.645 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.903 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.903 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.903 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.903 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.903 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.163 02:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:21:37.100 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.100 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.100 02:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.100 02:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
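By this point the outer loop has moved on to sha512 with ffdhe2048. The target/auth.sh@91, @92 and @93 markers in the trace correspond to the nested loops that drive all of these combinations; their overall shape, reconstructed from the trace, is roughly:

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # restrict the host to a single digest / DH-group pair ...
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                                            --dhchap-dhgroups "$dhgroup"
              # ... then run the attach / verify / detach / nvme-connect
              # cycle shown earlier for this key id
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done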
00:21:37.100 02:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.100 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.100 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.100 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.358 02:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.616 00:21:37.616 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.616 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.616 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.874 { 00:21:37.874 "cntlid": 107, 00:21:37.874 "qid": 0, 00:21:37.874 "state": "enabled", 00:21:37.874 "thread": 
"nvmf_tgt_poll_group_000", 00:21:37.874 "listen_address": { 00:21:37.874 "trtype": "TCP", 00:21:37.874 "adrfam": "IPv4", 00:21:37.874 "traddr": "10.0.0.2", 00:21:37.874 "trsvcid": "4420" 00:21:37.874 }, 00:21:37.874 "peer_address": { 00:21:37.874 "trtype": "TCP", 00:21:37.874 "adrfam": "IPv4", 00:21:37.874 "traddr": "10.0.0.1", 00:21:37.874 "trsvcid": "49376" 00:21:37.874 }, 00:21:37.874 "auth": { 00:21:37.874 "state": "completed", 00:21:37.874 "digest": "sha512", 00:21:37.874 "dhgroup": "ffdhe2048" 00:21:37.874 } 00:21:37.874 } 00:21:37.874 ]' 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:37.874 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.133 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.133 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.133 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.390 02:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:21:39.324 02:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.324 02:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.324 02:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.324 02:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.324 02:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.324 02:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.324 02:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.324 02:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:39.582 02:08:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.582 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.840 00:21:39.840 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.840 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.840 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.103 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.103 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.103 02:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.103 02:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.103 02:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.103 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.103 { 00:21:40.103 "cntlid": 109, 00:21:40.103 "qid": 0, 00:21:40.103 "state": "enabled", 00:21:40.103 "thread": "nvmf_tgt_poll_group_000", 00:21:40.103 "listen_address": { 00:21:40.103 "trtype": "TCP", 00:21:40.103 "adrfam": "IPv4", 00:21:40.103 "traddr": "10.0.0.2", 00:21:40.103 "trsvcid": "4420" 00:21:40.103 }, 00:21:40.103 "peer_address": { 00:21:40.103 "trtype": "TCP", 00:21:40.103 "adrfam": "IPv4", 00:21:40.103 "traddr": "10.0.0.1", 00:21:40.103 "trsvcid": "49408" 00:21:40.103 }, 00:21:40.103 "auth": { 00:21:40.103 "state": "completed", 00:21:40.103 "digest": "sha512", 00:21:40.103 "dhgroup": "ffdhe2048" 00:21:40.103 } 00:21:40.103 } 00:21:40.103 ]' 00:21:40.103 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.103 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.103 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.405 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.405 02:08:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.405 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.405 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.405 02:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.663 02:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:21:41.600 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.601 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.601 02:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.601 02:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.601 02:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.601 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.601 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.601 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.859 02:08:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.117 00:21:42.117 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.117 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.117 02:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.376 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.376 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.376 02:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.376 02:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.376 02:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.376 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.376 { 00:21:42.376 "cntlid": 111, 00:21:42.376 "qid": 0, 00:21:42.376 "state": "enabled", 00:21:42.376 "thread": "nvmf_tgt_poll_group_000", 00:21:42.376 "listen_address": { 00:21:42.376 "trtype": "TCP", 00:21:42.376 "adrfam": "IPv4", 00:21:42.376 "traddr": "10.0.0.2", 00:21:42.376 "trsvcid": "4420" 00:21:42.376 }, 00:21:42.376 "peer_address": { 00:21:42.376 "trtype": "TCP", 00:21:42.376 "adrfam": "IPv4", 00:21:42.376 "traddr": "10.0.0.1", 00:21:42.376 "trsvcid": "39290" 00:21:42.376 }, 00:21:42.376 "auth": { 00:21:42.376 "state": "completed", 00:21:42.376 "digest": "sha512", 00:21:42.376 "dhgroup": "ffdhe2048" 00:21:42.376 } 00:21:42.376 } 00:21:42.376 ]' 00:21:42.376 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.376 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.376 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.634 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.634 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.634 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.634 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.634 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.894 02:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:21:43.828 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.828 02:08:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.828 02:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.828 02:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.829 02:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.829 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.829 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.829 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.829 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.087 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.345 00:21:44.345 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.345 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.345 02:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.603 { 00:21:44.603 "cntlid": 113, 00:21:44.603 "qid": 0, 00:21:44.603 "state": "enabled", 00:21:44.603 "thread": "nvmf_tgt_poll_group_000", 00:21:44.603 "listen_address": { 00:21:44.603 "trtype": "TCP", 00:21:44.603 "adrfam": "IPv4", 00:21:44.603 "traddr": "10.0.0.2", 00:21:44.603 "trsvcid": "4420" 00:21:44.603 }, 00:21:44.603 "peer_address": { 00:21:44.603 "trtype": "TCP", 00:21:44.603 "adrfam": "IPv4", 00:21:44.603 "traddr": "10.0.0.1", 00:21:44.603 "trsvcid": "39312" 00:21:44.603 }, 00:21:44.603 "auth": { 00:21:44.603 "state": "completed", 00:21:44.603 "digest": "sha512", 00:21:44.603 "dhgroup": "ffdhe3072" 00:21:44.603 } 00:21:44.603 } 00:21:44.603 ]' 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.603 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.862 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.862 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.862 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.120 02:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:21:46.057 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.057 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.057 02:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.057 02:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.057 02:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.057 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.057 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.057 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.318 02:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.577 00:21:46.577 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.577 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.577 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.836 { 00:21:46.836 "cntlid": 115, 00:21:46.836 "qid": 0, 00:21:46.836 "state": "enabled", 00:21:46.836 "thread": "nvmf_tgt_poll_group_000", 00:21:46.836 "listen_address": { 00:21:46.836 "trtype": "TCP", 00:21:46.836 "adrfam": "IPv4", 00:21:46.836 "traddr": "10.0.0.2", 00:21:46.836 "trsvcid": "4420" 00:21:46.836 }, 00:21:46.836 "peer_address": { 00:21:46.836 "trtype": "TCP", 00:21:46.836 "adrfam": "IPv4", 00:21:46.836 "traddr": "10.0.0.1", 00:21:46.836 "trsvcid": "39352" 00:21:46.836 }, 00:21:46.836 "auth": { 00:21:46.836 "state": "completed", 00:21:46.836 "digest": "sha512", 00:21:46.836 "dhgroup": "ffdhe3072" 00:21:46.836 } 00:21:46.836 } 
00:21:46.836 ]' 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.836 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.096 02:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:21:48.475 02:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.475 02:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.475 02:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.475 02:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.475 02:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.475 02:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.475 02:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.475 02:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.475 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:48.475 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.475 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.476 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:48.476 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:48.476 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.476 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.476 02:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.476 02:08:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.476 02:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.476 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.476 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.734 00:21:48.734 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.734 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.734 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.993 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.993 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.993 02:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.993 02:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 02:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.993 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.993 { 00:21:48.993 "cntlid": 117, 00:21:48.993 "qid": 0, 00:21:48.993 "state": "enabled", 00:21:48.993 "thread": "nvmf_tgt_poll_group_000", 00:21:48.993 "listen_address": { 00:21:48.993 "trtype": "TCP", 00:21:48.993 "adrfam": "IPv4", 00:21:48.993 "traddr": "10.0.0.2", 00:21:48.993 "trsvcid": "4420" 00:21:48.993 }, 00:21:48.993 "peer_address": { 00:21:48.993 "trtype": "TCP", 00:21:48.993 "adrfam": "IPv4", 00:21:48.993 "traddr": "10.0.0.1", 00:21:48.993 "trsvcid": "39376" 00:21:48.993 }, 00:21:48.993 "auth": { 00:21:48.993 "state": "completed", 00:21:48.993 "digest": "sha512", 00:21:48.993 "dhgroup": "ffdhe3072" 00:21:48.993 } 00:21:48.993 } 00:21:48.993 ]' 00:21:48.993 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.251 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.251 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.251 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.251 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.251 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.251 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.251 02:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.509 02:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:21:50.445 02:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.445 02:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.445 02:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.445 02:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.445 02:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.445 02:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.445 02:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.445 02:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.703 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.963 00:21:51.222 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.222 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.222 02:08:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.481 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.481 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.481 02:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.481 02:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.481 02:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.481 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.481 { 00:21:51.481 "cntlid": 119, 00:21:51.481 "qid": 0, 00:21:51.481 "state": "enabled", 00:21:51.481 "thread": "nvmf_tgt_poll_group_000", 00:21:51.481 "listen_address": { 00:21:51.481 "trtype": "TCP", 00:21:51.481 "adrfam": "IPv4", 00:21:51.481 "traddr": "10.0.0.2", 00:21:51.481 "trsvcid": "4420" 00:21:51.481 }, 00:21:51.481 "peer_address": { 00:21:51.481 "trtype": "TCP", 00:21:51.481 "adrfam": "IPv4", 00:21:51.481 "traddr": "10.0.0.1", 00:21:51.481 "trsvcid": "39406" 00:21:51.481 }, 00:21:51.481 "auth": { 00:21:51.481 "state": "completed", 00:21:51.481 "digest": "sha512", 00:21:51.481 "dhgroup": "ffdhe3072" 00:21:51.481 } 00:21:51.481 } 00:21:51.481 ]' 00:21:51.481 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.481 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.481 02:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.481 02:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.481 02:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.481 02:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.481 02:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.481 02:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.739 02:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:21:52.674 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.674 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.674 02:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.674 02:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.674 02:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.674 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:52.674 02:08:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.674 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.674 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.933 02:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.192 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.192 02:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.450 00:21:53.450 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.450 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.451 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.708 { 00:21:53.708 "cntlid": 121, 00:21:53.708 "qid": 0, 00:21:53.708 "state": "enabled", 00:21:53.708 "thread": "nvmf_tgt_poll_group_000", 00:21:53.708 "listen_address": { 00:21:53.708 "trtype": "TCP", 00:21:53.708 "adrfam": "IPv4", 
00:21:53.708 "traddr": "10.0.0.2", 00:21:53.708 "trsvcid": "4420" 00:21:53.708 }, 00:21:53.708 "peer_address": { 00:21:53.708 "trtype": "TCP", 00:21:53.708 "adrfam": "IPv4", 00:21:53.708 "traddr": "10.0.0.1", 00:21:53.708 "trsvcid": "60838" 00:21:53.708 }, 00:21:53.708 "auth": { 00:21:53.708 "state": "completed", 00:21:53.708 "digest": "sha512", 00:21:53.708 "dhgroup": "ffdhe4096" 00:21:53.708 } 00:21:53.708 } 00:21:53.708 ]' 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:53.708 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.995 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.995 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.995 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.995 02:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:21:54.927 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.927 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.927 02:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.927 02:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.186 02:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.186 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.186 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.186 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:55.469 02:09:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.469 02:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.727 00:21:55.727 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.727 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.727 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.985 { 00:21:55.985 "cntlid": 123, 00:21:55.985 "qid": 0, 00:21:55.985 "state": "enabled", 00:21:55.985 "thread": "nvmf_tgt_poll_group_000", 00:21:55.985 "listen_address": { 00:21:55.985 "trtype": "TCP", 00:21:55.985 "adrfam": "IPv4", 00:21:55.985 "traddr": "10.0.0.2", 00:21:55.985 "trsvcid": "4420" 00:21:55.985 }, 00:21:55.985 "peer_address": { 00:21:55.985 "trtype": "TCP", 00:21:55.985 "adrfam": "IPv4", 00:21:55.985 "traddr": "10.0.0.1", 00:21:55.985 "trsvcid": "60872" 00:21:55.985 }, 00:21:55.985 "auth": { 00:21:55.985 "state": "completed", 00:21:55.985 "digest": "sha512", 00:21:55.985 "dhgroup": "ffdhe4096" 00:21:55.985 } 00:21:55.985 } 00:21:55.985 ]' 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.985 02:09:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.985 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.243 02:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:21:57.612 02:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.612 02:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.612 02:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.612 02:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.612 02:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.612 02:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.612 02:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.612 02:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.612 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.870 00:21:58.129 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.129 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.129 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.387 { 00:21:58.387 "cntlid": 125, 00:21:58.387 "qid": 0, 00:21:58.387 "state": "enabled", 00:21:58.387 "thread": "nvmf_tgt_poll_group_000", 00:21:58.387 "listen_address": { 00:21:58.387 "trtype": "TCP", 00:21:58.387 "adrfam": "IPv4", 00:21:58.387 "traddr": "10.0.0.2", 00:21:58.387 "trsvcid": "4420" 00:21:58.387 }, 00:21:58.387 "peer_address": { 00:21:58.387 "trtype": "TCP", 00:21:58.387 "adrfam": "IPv4", 00:21:58.387 "traddr": "10.0.0.1", 00:21:58.387 "trsvcid": "60898" 00:21:58.387 }, 00:21:58.387 "auth": { 00:21:58.387 "state": "completed", 00:21:58.387 "digest": "sha512", 00:21:58.387 "dhgroup": "ffdhe4096" 00:21:58.387 } 00:21:58.387 } 00:21:58.387 ]' 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.387 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.388 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.388 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.388 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.388 02:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.645 02:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:21:59.580 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
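Alongside the SPDK initiator path, every iteration also logs a kernel nvme-cli connect that passes the DH-HMAC-CHAP secrets on the command line. Trimmed from the invocations above (the DHHC-1 secret strings are elided here, not reproduced), the host-side leg looks roughly like:

  # Kernel-initiator leg of one iteration; the elided DHHC-1 strings are the
  # host and controller secrets matching the keys configured on the target above.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'

  # A successful handshake is followed immediately by a disconnect before the
  # next key/dhgroup combination is configured.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0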
00:21:59.581 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.581 02:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.581 02:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.581 02:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.581 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.581 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.581 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.840 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.407 00:22:00.407 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.407 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.407 02:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.665 { 00:22:00.665 "cntlid": 127, 00:22:00.665 "qid": 0, 00:22:00.665 "state": "enabled", 00:22:00.665 "thread": "nvmf_tgt_poll_group_000", 00:22:00.665 "listen_address": { 00:22:00.665 "trtype": "TCP", 00:22:00.665 "adrfam": "IPv4", 00:22:00.665 "traddr": "10.0.0.2", 00:22:00.665 "trsvcid": "4420" 00:22:00.665 }, 00:22:00.665 "peer_address": { 00:22:00.665 "trtype": "TCP", 00:22:00.665 "adrfam": "IPv4", 00:22:00.665 "traddr": "10.0.0.1", 00:22:00.665 "trsvcid": "60932" 00:22:00.665 }, 00:22:00.665 "auth": { 00:22:00.665 "state": "completed", 00:22:00.665 "digest": "sha512", 00:22:00.665 "dhgroup": "ffdhe4096" 00:22:00.665 } 00:22:00.665 } 00:22:00.665 ]' 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.665 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.923 02:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:22:01.860 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.860 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.860 02:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.860 02:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.860 02:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.860 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.860 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.860 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.860 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.119 02:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.685 00:22:02.685 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.685 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.685 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.943 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.943 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.943 02:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.943 02:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.943 02:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.943 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.943 { 00:22:02.943 "cntlid": 129, 00:22:02.943 "qid": 0, 00:22:02.943 "state": "enabled", 00:22:02.943 "thread": "nvmf_tgt_poll_group_000", 00:22:02.943 "listen_address": { 00:22:02.943 "trtype": "TCP", 00:22:02.943 "adrfam": "IPv4", 00:22:02.943 "traddr": "10.0.0.2", 00:22:02.943 "trsvcid": "4420" 00:22:02.943 }, 00:22:02.943 "peer_address": { 00:22:02.943 "trtype": "TCP", 00:22:02.943 "adrfam": "IPv4", 00:22:02.943 "traddr": "10.0.0.1", 00:22:02.943 "trsvcid": "53912" 00:22:02.943 }, 00:22:02.943 "auth": { 00:22:02.943 "state": "completed", 00:22:02.943 "digest": "sha512", 00:22:02.943 "dhgroup": "ffdhe6144" 00:22:02.943 } 00:22:02.943 } 00:22:02.943 ]' 00:22:02.944 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.944 02:09:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.944 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.201 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:03.201 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.201 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.201 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.201 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.459 02:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:22:04.396 02:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.396 02:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.396 02:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.396 02:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.396 02:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.396 02:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.396 02:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.396 02:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.654 02:09:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.654 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.221 00:22:05.221 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.221 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.221 02:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.479 { 00:22:05.479 "cntlid": 131, 00:22:05.479 "qid": 0, 00:22:05.479 "state": "enabled", 00:22:05.479 "thread": "nvmf_tgt_poll_group_000", 00:22:05.479 "listen_address": { 00:22:05.479 "trtype": "TCP", 00:22:05.479 "adrfam": "IPv4", 00:22:05.479 "traddr": "10.0.0.2", 00:22:05.479 "trsvcid": "4420" 00:22:05.479 }, 00:22:05.479 "peer_address": { 00:22:05.479 "trtype": "TCP", 00:22:05.479 "adrfam": "IPv4", 00:22:05.479 "traddr": "10.0.0.1", 00:22:05.479 "trsvcid": "53942" 00:22:05.479 }, 00:22:05.479 "auth": { 00:22:05.479 "state": "completed", 00:22:05.479 "digest": "sha512", 00:22:05.479 "dhgroup": "ffdhe6144" 00:22:05.479 } 00:22:05.479 } 00:22:05.479 ]' 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.479 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.737 02:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:22:06.669 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.669 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.669 02:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.669 02:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.669 02:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.669 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.669 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.669 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.928 02:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.494 00:22:07.494 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.494 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.494 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.751 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.751 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.751 02:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.751 02:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.751 02:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.751 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.751 { 00:22:07.751 "cntlid": 133, 00:22:07.751 "qid": 0, 00:22:07.751 "state": "enabled", 00:22:07.751 "thread": "nvmf_tgt_poll_group_000", 00:22:07.751 "listen_address": { 00:22:07.751 "trtype": "TCP", 00:22:07.751 "adrfam": "IPv4", 00:22:07.751 "traddr": "10.0.0.2", 00:22:07.751 "trsvcid": "4420" 00:22:07.751 }, 00:22:07.751 "peer_address": { 00:22:07.751 "trtype": "TCP", 00:22:07.751 "adrfam": "IPv4", 00:22:07.751 "traddr": "10.0.0.1", 00:22:07.751 "trsvcid": "53962" 00:22:07.751 }, 00:22:07.751 "auth": { 00:22:07.751 "state": "completed", 00:22:07.751 "digest": "sha512", 00:22:07.751 "dhgroup": "ffdhe6144" 00:22:07.751 } 00:22:07.751 } 00:22:07.751 ]' 00:22:07.751 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.031 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.031 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.031 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.031 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.031 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.032 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.032 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.301 02:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:22:09.236 02:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.236 02:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.236 02:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.236 02:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.237 02:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
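The ffdhe6144 pass with key2/ckey2 has just attached; what follows is the same verification that closes every pass. As a sketch (jq filters as traced; $qpairs is an illustrative variable name, and the expected dhgroup tracks the group under test):

  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]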
00:22:09.237 02:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.237 02:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.237 02:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.496 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.066 00:22:10.066 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.066 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.066 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.325 { 00:22:10.325 "cntlid": 135, 00:22:10.325 "qid": 0, 00:22:10.325 "state": "enabled", 00:22:10.325 "thread": "nvmf_tgt_poll_group_000", 00:22:10.325 "listen_address": { 00:22:10.325 "trtype": "TCP", 00:22:10.325 "adrfam": "IPv4", 00:22:10.325 "traddr": "10.0.0.2", 00:22:10.325 "trsvcid": 
"4420" 00:22:10.325 }, 00:22:10.325 "peer_address": { 00:22:10.325 "trtype": "TCP", 00:22:10.325 "adrfam": "IPv4", 00:22:10.325 "traddr": "10.0.0.1", 00:22:10.325 "trsvcid": "53982" 00:22:10.325 }, 00:22:10.325 "auth": { 00:22:10.325 "state": "completed", 00:22:10.325 "digest": "sha512", 00:22:10.325 "dhgroup": "ffdhe6144" 00:22:10.325 } 00:22:10.325 } 00:22:10.325 ]' 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.325 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.326 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.326 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.326 02:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.584 02:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:22:11.521 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.521 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.521 02:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.521 02:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.521 02:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.521 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.521 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.521 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.521 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.088 02:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.656 00:22:12.914 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.914 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.914 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.172 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.172 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.172 02:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.172 02:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.172 02:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.172 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.172 { 00:22:13.172 "cntlid": 137, 00:22:13.172 "qid": 0, 00:22:13.172 "state": "enabled", 00:22:13.172 "thread": "nvmf_tgt_poll_group_000", 00:22:13.172 "listen_address": { 00:22:13.172 "trtype": "TCP", 00:22:13.172 "adrfam": "IPv4", 00:22:13.172 "traddr": "10.0.0.2", 00:22:13.172 "trsvcid": "4420" 00:22:13.172 }, 00:22:13.172 "peer_address": { 00:22:13.172 "trtype": "TCP", 00:22:13.172 "adrfam": "IPv4", 00:22:13.173 "traddr": "10.0.0.1", 00:22:13.173 "trsvcid": "35728" 00:22:13.173 }, 00:22:13.173 "auth": { 00:22:13.173 "state": "completed", 00:22:13.173 "digest": "sha512", 00:22:13.173 "dhgroup": "ffdhe8192" 00:22:13.173 } 00:22:13.173 } 00:22:13.173 ]' 00:22:13.173 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.173 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.173 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.173 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.173 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.173 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:22:13.173 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.173 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.430 02:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:22:14.366 02:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.366 02:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.366 02:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.366 02:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.366 02:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.366 02:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.366 02:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.366 02:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.623 02:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.557 00:22:15.557 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.557 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.557 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.815 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.815 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.815 02:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.815 02:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.815 02:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.815 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.815 { 00:22:15.815 "cntlid": 139, 00:22:15.815 "qid": 0, 00:22:15.815 "state": "enabled", 00:22:15.815 "thread": "nvmf_tgt_poll_group_000", 00:22:15.815 "listen_address": { 00:22:15.815 "trtype": "TCP", 00:22:15.815 "adrfam": "IPv4", 00:22:15.815 "traddr": "10.0.0.2", 00:22:15.815 "trsvcid": "4420" 00:22:15.815 }, 00:22:15.815 "peer_address": { 00:22:15.815 "trtype": "TCP", 00:22:15.815 "adrfam": "IPv4", 00:22:15.815 "traddr": "10.0.0.1", 00:22:15.815 "trsvcid": "35764" 00:22:15.815 }, 00:22:15.815 "auth": { 00:22:15.815 "state": "completed", 00:22:15.815 "digest": "sha512", 00:22:15.815 "dhgroup": "ffdhe8192" 00:22:15.815 } 00:22:15.815 } 00:22:15.815 ]' 00:22:15.815 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.815 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.815 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.073 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.073 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.073 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.073 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.073 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.329 02:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzZjN2I5NTJiZDhkOTE3Y2U0YWY3M2U0ZmE0ZjUyNTb4XYZ9: --dhchap-ctrl-secret DHHC-1:02:MzA4N2Y3NjZiNmVjMjgxYjllOWQxYThkMWI0YmU5NTQ2NjNhNDcyNzQ0OTA5NGY5po3q7g==: 00:22:17.267 02:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
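The ffdhe8192 passes below repeat the same kernel-initiator check as the earlier groups: nvme-cli is given the host secret, plus a controller secret whenever the key has a ckey counterpart (key3, added with --dhchap-key only, is the unidirectional case). Roughly, with $host_secret/$ctrl_secret standing in for the DHHC-1 blobs printed in the trace:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0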
00:22:17.267 02:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.267 02:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.267 02:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.267 02:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.267 02:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.267 02:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.267 02:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.525 02:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.459 00:22:18.459 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.459 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.459 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.717 { 00:22:18.717 "cntlid": 141, 00:22:18.717 "qid": 0, 00:22:18.717 "state": "enabled", 00:22:18.717 "thread": "nvmf_tgt_poll_group_000", 00:22:18.717 "listen_address": { 00:22:18.717 "trtype": "TCP", 00:22:18.717 "adrfam": "IPv4", 00:22:18.717 "traddr": "10.0.0.2", 00:22:18.717 "trsvcid": "4420" 00:22:18.717 }, 00:22:18.717 "peer_address": { 00:22:18.717 "trtype": "TCP", 00:22:18.717 "adrfam": "IPv4", 00:22:18.717 "traddr": "10.0.0.1", 00:22:18.717 "trsvcid": "35792" 00:22:18.717 }, 00:22:18.717 "auth": { 00:22:18.717 "state": "completed", 00:22:18.717 "digest": "sha512", 00:22:18.717 "dhgroup": "ffdhe8192" 00:22:18.717 } 00:22:18.717 } 00:22:18.717 ]' 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.717 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.975 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.975 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.975 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.233 02:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI4ZmZiYTVkYTlkMGQ4NzU0M2IyMjU4ZWQ3NWViODg5NDAwZjlhNWJmMmU5ZjMyqF8YBg==: --dhchap-ctrl-secret DHHC-1:01:NzA2YmY5NWUzZjNlNzUwODJkNzJlOWJkZjVkNjlmZjjQdptu: 00:22:20.168 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.168 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.168 02:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.168 02:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.168 02:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.168 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.168 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.168 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.425 02:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.359 00:22:21.359 02:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.359 02:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.359 02:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.617 { 00:22:21.617 "cntlid": 143, 00:22:21.617 "qid": 0, 00:22:21.617 "state": "enabled", 00:22:21.617 "thread": "nvmf_tgt_poll_group_000", 00:22:21.617 "listen_address": { 00:22:21.617 "trtype": "TCP", 00:22:21.617 "adrfam": "IPv4", 00:22:21.617 "traddr": "10.0.0.2", 00:22:21.617 "trsvcid": "4420" 00:22:21.617 }, 00:22:21.617 "peer_address": { 00:22:21.617 "trtype": "TCP", 00:22:21.617 "adrfam": "IPv4", 00:22:21.617 "traddr": "10.0.0.1", 00:22:21.617 "trsvcid": "35810" 00:22:21.617 }, 00:22:21.617 "auth": { 00:22:21.617 "state": "completed", 00:22:21.617 "digest": "sha512", 00:22:21.617 "dhgroup": "ffdhe8192" 00:22:21.617 } 00:22:21.617 } 00:22:21.617 ]' 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.617 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.899 02:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:22.841 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.099 02:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.038 00:22:24.038 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.038 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.038 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.296 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.296 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.296 02:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.296 02:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.296 02:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.296 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.296 { 00:22:24.296 "cntlid": 145, 00:22:24.296 "qid": 0, 00:22:24.296 "state": "enabled", 00:22:24.296 "thread": "nvmf_tgt_poll_group_000", 00:22:24.296 "listen_address": { 00:22:24.296 "trtype": "TCP", 00:22:24.296 "adrfam": "IPv4", 00:22:24.296 "traddr": "10.0.0.2", 00:22:24.296 "trsvcid": "4420" 00:22:24.296 }, 00:22:24.296 "peer_address": { 00:22:24.296 "trtype": "TCP", 00:22:24.296 "adrfam": "IPv4", 00:22:24.296 "traddr": "10.0.0.1", 00:22:24.296 "trsvcid": "50414" 00:22:24.296 }, 00:22:24.296 "auth": { 00:22:24.296 "state": "completed", 00:22:24.296 "digest": "sha512", 00:22:24.296 "dhgroup": "ffdhe8192" 00:22:24.296 } 00:22:24.296 } 00:22:24.296 ]' 00:22:24.296 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.296 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.296 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.554 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.554 02:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.554 02:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.554 02:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.554 02:09:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.813 02:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjQzZDA1YjUwNTY1OGM0ZGEyNjQzZjYzOWZjYTAxMGQxODhiMmI5MzhkOTU2NjFkbL3+dA==: --dhchap-ctrl-secret DHHC-1:03:MzVhYTY3MmMzYTU3M2U3ZWM3ZGIyMDFhODM2MTA1MGNjOThjOTYwMTc5MmE0NGU0ZTg2ZjViNThhZWE1NTUxOJvhbFc=: 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:25.747 02:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.682 request: 00:22:26.682 { 00:22:26.682 "name": "nvme0", 00:22:26.682 "trtype": "tcp", 00:22:26.682 "traddr": "10.0.0.2", 00:22:26.682 "adrfam": "ipv4", 00:22:26.682 "trsvcid": "4420", 00:22:26.682 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.682 "prchk_reftag": false, 00:22:26.682 "prchk_guard": false, 00:22:26.682 "hdgst": false, 00:22:26.682 "ddgst": false, 00:22:26.682 "dhchap_key": "key2", 00:22:26.682 "method": "bdev_nvme_attach_controller", 00:22:26.682 "req_id": 1 00:22:26.682 } 00:22:26.682 Got JSON-RPC error response 00:22:26.682 response: 00:22:26.682 { 00:22:26.682 "code": -5, 00:22:26.682 "message": "Input/output error" 00:22:26.682 } 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:26.682 02:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:27.253 request: 00:22:27.253 { 00:22:27.253 "name": "nvme0", 00:22:27.253 "trtype": "tcp", 00:22:27.253 "traddr": "10.0.0.2", 00:22:27.253 "adrfam": "ipv4", 00:22:27.253 "trsvcid": "4420", 00:22:27.253 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:27.253 "prchk_reftag": false, 00:22:27.253 "prchk_guard": false, 00:22:27.253 "hdgst": false, 00:22:27.253 "ddgst": false, 00:22:27.253 "dhchap_key": "key1", 00:22:27.253 "dhchap_ctrlr_key": "ckey2", 00:22:27.253 "method": "bdev_nvme_attach_controller", 00:22:27.253 "req_id": 1 00:22:27.253 } 00:22:27.253 Got JSON-RPC error response 00:22:27.253 response: 00:22:27.253 { 00:22:27.253 "code": -5, 00:22:27.253 "message": "Input/output error" 00:22:27.253 } 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.512 02:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.084 request: 00:22:28.084 { 00:22:28.084 "name": "nvme0", 00:22:28.084 "trtype": "tcp", 00:22:28.084 "traddr": "10.0.0.2", 00:22:28.084 "adrfam": "ipv4", 00:22:28.084 "trsvcid": "4420", 00:22:28.084 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.084 "prchk_reftag": false, 00:22:28.084 "prchk_guard": false, 00:22:28.084 "hdgst": false, 00:22:28.084 "ddgst": false, 00:22:28.084 "dhchap_key": "key1", 00:22:28.084 "dhchap_ctrlr_key": "ckey1", 00:22:28.084 "method": "bdev_nvme_attach_controller", 00:22:28.084 "req_id": 1 00:22:28.084 } 00:22:28.084 Got JSON-RPC error response 00:22:28.084 response: 00:22:28.084 { 00:22:28.084 "code": -5, 00:22:28.084 "message": "Input/output error" 00:22:28.084 } 00:22:28.084 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:28.084 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:28.084 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:28.085 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:28.085 02:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.085 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.085 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.085 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.085 02:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1596483 00:22:28.085 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1596483 ']' 00:22:28.085 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1596483 00:22:28.344 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:28.344 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.344 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1596483 00:22:28.344 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:28.344 02:09:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:28.344 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1596483' 00:22:28.344 killing process with pid 1596483 00:22:28.344 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1596483 00:22:28.344 02:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1596483 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1619141 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1619141 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1619141 ']' 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.603 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1619141 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1619141 ']' 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
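For readability, a minimal sketch of what the trace is doing at this point: the first nvmf target (pid 1596483) has just been killed and a second one is started inside the cvl_0_0_ns_spdk namespace with DH-HCHAP auth logging enabled, after which the harness waits for its RPC socket. Paths and flags are copied from the captured command line; the snippet itself is illustrative and not part of the original output.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# the harness then polls the app's UNIX domain socket with its waitforlisten helper
# (defaults to /var/tmp/spdk.sock) before issuing any further RPCs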
00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.861 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.119 02:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.052 00:22:30.052 02:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.052 02:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.052 02:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.310 02:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.310 02:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.310 02:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.310 02:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.310 02:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.310 02:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.310 { 00:22:30.310 
"cntlid": 1, 00:22:30.310 "qid": 0, 00:22:30.310 "state": "enabled", 00:22:30.310 "thread": "nvmf_tgt_poll_group_000", 00:22:30.310 "listen_address": { 00:22:30.310 "trtype": "TCP", 00:22:30.310 "adrfam": "IPv4", 00:22:30.310 "traddr": "10.0.0.2", 00:22:30.310 "trsvcid": "4420" 00:22:30.310 }, 00:22:30.310 "peer_address": { 00:22:30.310 "trtype": "TCP", 00:22:30.310 "adrfam": "IPv4", 00:22:30.310 "traddr": "10.0.0.1", 00:22:30.310 "trsvcid": "50468" 00:22:30.310 }, 00:22:30.310 "auth": { 00:22:30.310 "state": "completed", 00:22:30.310 "digest": "sha512", 00:22:30.310 "dhgroup": "ffdhe8192" 00:22:30.310 } 00:22:30.310 } 00:22:30.310 ]' 00:22:30.310 02:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.310 02:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.310 02:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.567 02:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.568 02:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.568 02:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.568 02:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.568 02:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.827 02:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmM0NzcyOGIxY2U0NTVmMzk5ZDY4YjFlODMzOWZjYWE3YjlkOTgxNjYwYzY1YmNlYzFkM2I5MWY4ZmY2ZWJhN3oquY4=: 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:31.764 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:32.022 02:09:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.022 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:32.022 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.022 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:32.022 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.022 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:32.022 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.022 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.022 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.280 request: 00:22:32.280 { 00:22:32.280 "name": "nvme0", 00:22:32.280 "trtype": "tcp", 00:22:32.280 "traddr": "10.0.0.2", 00:22:32.280 "adrfam": "ipv4", 00:22:32.280 "trsvcid": "4420", 00:22:32.280 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:32.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.280 "prchk_reftag": false, 00:22:32.280 "prchk_guard": false, 00:22:32.280 "hdgst": false, 00:22:32.280 "ddgst": false, 00:22:32.280 "dhchap_key": "key3", 00:22:32.280 "method": "bdev_nvme_attach_controller", 00:22:32.280 "req_id": 1 00:22:32.280 } 00:22:32.280 Got JSON-RPC error response 00:22:32.280 response: 00:22:32.280 { 00:22:32.280 "code": -5, 00:22:32.280 "message": "Input/output error" 00:22:32.280 } 00:22:32.280 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:32.280 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:32.280 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:32.280 02:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:32.280 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:32.280 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:32.280 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:32.280 02:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:32.539 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.539 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:32.539 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.539 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:32.539 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.539 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:32.539 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.539 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.539 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.798 request: 00:22:32.798 { 00:22:32.798 "name": "nvme0", 00:22:32.798 "trtype": "tcp", 00:22:32.798 "traddr": "10.0.0.2", 00:22:32.798 "adrfam": "ipv4", 00:22:32.798 "trsvcid": "4420", 00:22:32.798 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:32.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.798 "prchk_reftag": false, 00:22:32.798 "prchk_guard": false, 00:22:32.798 "hdgst": false, 00:22:32.798 "ddgst": false, 00:22:32.798 "dhchap_key": "key3", 00:22:32.798 "method": "bdev_nvme_attach_controller", 00:22:32.798 "req_id": 1 00:22:32.798 } 00:22:32.798 Got JSON-RPC error response 00:22:32.798 response: 00:22:32.798 { 00:22:32.798 "code": -5, 00:22:32.798 "message": "Input/output error" 00:22:32.798 } 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:32.798 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.056 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.314 request: 00:22:33.314 { 00:22:33.314 "name": "nvme0", 00:22:33.314 "trtype": "tcp", 00:22:33.314 "traddr": "10.0.0.2", 00:22:33.314 "adrfam": "ipv4", 00:22:33.314 "trsvcid": "4420", 00:22:33.314 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:33.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.314 "prchk_reftag": false, 00:22:33.314 "prchk_guard": false, 00:22:33.314 "hdgst": false, 00:22:33.314 "ddgst": false, 00:22:33.314 
"dhchap_key": "key0", 00:22:33.314 "dhchap_ctrlr_key": "key1", 00:22:33.314 "method": "bdev_nvme_attach_controller", 00:22:33.314 "req_id": 1 00:22:33.314 } 00:22:33.314 Got JSON-RPC error response 00:22:33.314 response: 00:22:33.314 { 00:22:33.314 "code": -5, 00:22:33.314 "message": "Input/output error" 00:22:33.314 } 00:22:33.314 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:33.314 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.314 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.314 02:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.314 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:33.314 02:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:33.574 00:22:33.574 02:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:33.574 02:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:33.574 02:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.832 02:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.832 02:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.832 02:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1596503 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1596503 ']' 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1596503 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1596503 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1596503' 00:22:34.090 killing process with pid 1596503 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1596503 00:22:34.090 02:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1596503 
00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.659 rmmod nvme_tcp 00:22:34.659 rmmod nvme_fabrics 00:22:34.659 rmmod nvme_keyring 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1619141 ']' 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1619141 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1619141 ']' 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1619141 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1619141 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1619141' 00:22:34.659 killing process with pid 1619141 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1619141 00:22:34.659 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1619141 00:22:34.918 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.918 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.918 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.918 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.918 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.918 02:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.918 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.918 02:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.830 02:09:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.830 02:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.MNa /tmp/spdk.key-sha256.u4r /tmp/spdk.key-sha384.3mM /tmp/spdk.key-sha512.ZzA /tmp/spdk.key-sha512.MqJ /tmp/spdk.key-sha384.Vt5 /tmp/spdk.key-sha256.JUT '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:36.830 00:22:36.830 real 3m10.172s 00:22:36.830 user 7m23.017s 00:22:36.830 sys 0m25.010s 00:22:36.830 02:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:36.830 02:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.830 ************************************ 00:22:36.830 END TEST nvmf_auth_target 00:22:36.830 ************************************ 00:22:36.831 02:09:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:36.831 02:09:42 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:36.831 02:09:42 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:36.831 02:09:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:36.831 02:09:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:36.831 02:09:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:36.831 ************************************ 00:22:36.831 START TEST nvmf_bdevio_no_huge 00:22:36.831 ************************************ 00:22:36.831 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.090 * Looking for test storage... 00:22:37.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.090 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.091 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.091 02:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.996 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:38.997 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:38.997 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:38.997 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:38.997 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:22:38.997 00:22:38.997 --- 10.0.0.2 ping statistics --- 00:22:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.997 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:22:38.997 00:22:38.997 --- 10.0.0.1 ping statistics --- 00:22:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.997 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1621892 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1621892 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1621892 ']' 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.997 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:38.997 [2024-07-14 02:09:44.608142] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:38.997 [2024-07-14 02:09:44.608229] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:38.997 [2024-07-14 02:09:44.674603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.256 [2024-07-14 02:09:44.756759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
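For readability: the nvmf_tcp_init plumbing traced a few lines above reduces to the small iproute2 sequence sketched below. The interface names (cvl_0_0, cvl_0_1), addresses and iptables rule are exactly the ones in the trace; this is only a condensed recap of what the script already did, not an extra step. The second E810 port stays in the default namespace as the initiator (10.0.0.1), and the first port is moved into the cvl_0_0_ns_spdk namespace and carries the target address (10.0.0.2).
# condensed recap of nvmf_tcp_init (all commands taken from the trace above)
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                              # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator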
00:22:39.256 [2024-07-14 02:09:44.756826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.256 [2024-07-14 02:09:44.756840] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.256 [2024-07-14 02:09:44.756851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.256 [2024-07-14 02:09:44.756861] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.256 [2024-07-14 02:09:44.756963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.256 [2024-07-14 02:09:44.757038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:39.256 [2024-07-14 02:09:44.757109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:39.256 [2024-07-14 02:09:44.757111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.256 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.256 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:39.256 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.257 [2024-07-14 02:09:44.877044] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.257 Malloc0 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.257 02:09:44 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.257 [2024-07-14 02:09:44.915614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.257 { 00:22:39.257 "params": { 00:22:39.257 "name": "Nvme$subsystem", 00:22:39.257 "trtype": "$TEST_TRANSPORT", 00:22:39.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.257 "adrfam": "ipv4", 00:22:39.257 "trsvcid": "$NVMF_PORT", 00:22:39.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.257 "hdgst": ${hdgst:-false}, 00:22:39.257 "ddgst": ${ddgst:-false} 00:22:39.257 }, 00:22:39.257 "method": "bdev_nvme_attach_controller" 00:22:39.257 } 00:22:39.257 EOF 00:22:39.257 )") 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:39.257 02:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:39.257 "params": { 00:22:39.257 "name": "Nvme1", 00:22:39.257 "trtype": "tcp", 00:22:39.257 "traddr": "10.0.0.2", 00:22:39.257 "adrfam": "ipv4", 00:22:39.257 "trsvcid": "4420", 00:22:39.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.257 "hdgst": false, 00:22:39.257 "ddgst": false 00:22:39.257 }, 00:22:39.257 "method": "bdev_nvme_attach_controller" 00:22:39.257 }' 00:22:39.516 [2024-07-14 02:09:44.961824] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
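The bdevio run that is starting here is driven entirely by the JSON that gen_nvmf_target_json printed just above: the heredoc template is rendered (jq, printf) into a single bdev_nvme_attach_controller entry and handed to bdevio on --json /dev/fd/62, while --no-huge -s 1024 keeps the whole test off hugepages, which is what gives this suite its name. A rough standalone equivalent is sketched below; the surrounding "subsystems"/"bdev" wrapper is the standard SPDK application JSON config layout and is assumed here, the file name is arbitrary, and the parameters are the rendered values from the trace.
# minimal sketch: save the rendered attach-controller config and point bdevio at it
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024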
00:22:39.516 [2024-07-14 02:09:44.961935] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1621920 ] 00:22:39.516 [2024-07-14 02:09:45.023647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:39.516 [2024-07-14 02:09:45.111058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.516 [2024-07-14 02:09:45.111107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.516 [2024-07-14 02:09:45.111110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.774 I/O targets: 00:22:39.774 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:39.774 00:22:39.774 00:22:39.774 CUnit - A unit testing framework for C - Version 2.1-3 00:22:39.774 http://cunit.sourceforge.net/ 00:22:39.774 00:22:39.774 00:22:39.774 Suite: bdevio tests on: Nvme1n1 00:22:39.774 Test: blockdev write read block ...passed 00:22:39.774 Test: blockdev write zeroes read block ...passed 00:22:39.774 Test: blockdev write zeroes read no split ...passed 00:22:39.774 Test: blockdev write zeroes read split ...passed 00:22:40.031 Test: blockdev write zeroes read split partial ...passed 00:22:40.031 Test: blockdev reset ...[2024-07-14 02:09:45.493357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.031 [2024-07-14 02:09:45.493474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159eb00 (9): Bad file descriptor 00:22:40.031 [2024-07-14 02:09:45.670124] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:40.031 passed 00:22:40.031 Test: blockdev write read 8 blocks ...passed 00:22:40.031 Test: blockdev write read size > 128k ...passed 00:22:40.031 Test: blockdev write read invalid size ...passed 00:22:40.288 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:40.288 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:40.288 Test: blockdev write read max offset ...passed 00:22:40.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:40.288 Test: blockdev writev readv 8 blocks ...passed 00:22:40.288 Test: blockdev writev readv 30 x 1block ...passed 00:22:40.288 Test: blockdev writev readv block ...passed 00:22:40.288 Test: blockdev writev readv size > 128k ...passed 00:22:40.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:40.288 Test: blockdev comparev and writev ...[2024-07-14 02:09:45.891153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.288 [2024-07-14 02:09:45.891189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.891213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.288 [2024-07-14 02:09:45.891231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.891651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.288 [2024-07-14 02:09:45.891677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.891699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.288 [2024-07-14 02:09:45.891715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.892125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.288 [2024-07-14 02:09:45.892150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.892177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.288 [2024-07-14 02:09:45.892195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.892600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.288 [2024-07-14 02:09:45.892624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.892646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.288 [2024-07-14 02:09:45.892662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:40.288 passed 00:22:40.288 Test: blockdev nvme passthru rw ...passed 00:22:40.288 Test: blockdev nvme passthru vendor specific ...[2024-07-14 02:09:45.976286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.288 [2024-07-14 02:09:45.976313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.976550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.288 [2024-07-14 02:09:45.976574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.976791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.288 [2024-07-14 02:09:45.976815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:40.288 [2024-07-14 02:09:45.977050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.288 [2024-07-14 02:09:45.977074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:40.288 passed 00:22:40.547 Test: blockdev nvme admin passthru ...passed 00:22:40.547 Test: blockdev copy ...passed 00:22:40.547 00:22:40.547 Run Summary: Type Total Ran Passed Failed Inactive 00:22:40.547 suites 1 1 n/a 0 0 00:22:40.547 tests 23 23 23 0 0 00:22:40.547 asserts 152 152 152 0 n/a 00:22:40.547 00:22:40.547 Elapsed time = 1.530 seconds 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.806 rmmod nvme_tcp 00:22:40.806 rmmod nvme_fabrics 00:22:40.806 rmmod nvme_keyring 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1621892 ']' 00:22:40.806 02:09:46 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1621892 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1621892 ']' 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1621892 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1621892 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1621892' 00:22:40.806 killing process with pid 1621892 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1621892 00:22:40.806 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1621892 00:22:41.373 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.373 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.373 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.373 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.373 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.373 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.373 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.373 02:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.279 02:09:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:43.279 00:22:43.279 real 0m6.385s 00:22:43.279 user 0m10.917s 00:22:43.279 sys 0m2.453s 00:22:43.279 02:09:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.279 02:09:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.279 ************************************ 00:22:43.279 END TEST nvmf_bdevio_no_huge 00:22:43.279 ************************************ 00:22:43.279 02:09:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:43.279 02:09:48 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:43.279 02:09:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:43.279 02:09:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.279 02:09:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:43.279 ************************************ 00:22:43.279 START TEST nvmf_tls 00:22:43.279 ************************************ 00:22:43.279 02:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:43.279 * Looking for test storage... 
00:22:43.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.538 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.539 02:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.441 
02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.441 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:45.442 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:45.442 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:45.442 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:45.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:22:45.442 00:22:45.442 --- 10.0.0.2 ping statistics --- 00:22:45.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.442 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:22:45.442 00:22:45.442 --- 10.0.0.1 ping statistics --- 00:22:45.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.442 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1623989 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1623989 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1623989 ']' 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.442 02:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.442 [2024-07-14 02:09:51.043079] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:45.442 [2024-07-14 02:09:51.043170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.442 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.442 [2024-07-14 02:09:51.110117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.700 [2024-07-14 02:09:51.195903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.700 [2024-07-14 02:09:51.195976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:45.700 [2024-07-14 02:09:51.195990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.700 [2024-07-14 02:09:51.196017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.700 [2024-07-14 02:09:51.196027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.700 [2024-07-14 02:09:51.196059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.700 02:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.700 02:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:45.700 02:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.700 02:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.700 02:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.700 02:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.700 02:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:45.700 02:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:45.959 true 00:22:45.959 02:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:45.959 02:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:46.219 02:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:46.219 02:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:46.219 02:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:46.479 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.479 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:46.739 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:46.739 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:46.739 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:46.998 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.998 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:47.257 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:47.257 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:47.257 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.257 02:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:47.516 02:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:47.516 02:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:47.516 02:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:47.779 02:09:53 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.779 02:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:48.037 02:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:48.037 02:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:48.037 02:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:48.295 02:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:48.295 02:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.FGv67Nbo8f 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6pFl1sCf7A 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.FGv67Nbo8f 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6pFl1sCf7A 00:22:48.554 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:48.812 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:49.379 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.FGv67Nbo8f 00:22:49.379 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FGv67Nbo8f 00:22:49.379 02:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:49.637 [2024-07-14 02:09:55.153383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.637 02:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:49.895 02:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:50.153 [2024-07-14 02:09:55.634640] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.153 [2024-07-14 02:09:55.634863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.153 02:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:50.412 malloc0 00:22:50.412 02:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:50.735 02:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FGv67Nbo8f 00:22:51.010 [2024-07-14 02:09:56.472664] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:51.010 02:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.FGv67Nbo8f 00:22:51.010 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.995 Initializing NVMe Controllers 00:23:00.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.995 Initialization complete. Launching workers. 
00:23:00.995 ======================================================== 00:23:00.995 Latency(us) 00:23:00.995 Device Information : IOPS MiB/s Average min max 00:23:00.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7717.90 30.15 8295.22 1160.81 9410.49 00:23:00.995 ======================================================== 00:23:00.995 Total : 7717.90 30.15 8295.22 1160.81 9410.49 00:23:00.995 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FGv67Nbo8f 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FGv67Nbo8f' 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1625879 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1625879 /var/tmp/bdevperf.sock 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1625879 ']' 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.995 02:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.995 [2024-07-14 02:10:06.644472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
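Both TLS keys used in this test come out of format_interchange_psk, which shells out to a short python snippet (the 'python -' step in the trace). The sketch below is a hand-written approximation of that step, under the assumption, consistent with the output above, that the interchange string is the configured PSK bytes with their CRC-32 appended in little-endian order, base64-encoded, behind the NVMeTLSkey-1 prefix and a two-digit HMAC identifier (1 is taken here to mean HMAC-SHA-256).
# rough equivalent of format_interchange_psk; not part of the recorded run
key=00112233445566778899aabbccddeeff    # same example key as above
hmac=1                                  # rendered as the ":01:" field
python3 - "$key" "$hmac" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                      # the PSK is handled as an ASCII string
crc = zlib.crc32(key).to_bytes(4, "little")     # assumed little-endian CRC-32 suffix
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
      base64.b64encode(key + crc).decode()))
PY
# expected to reproduce the NVMeTLSkey-1:01:MDAx... string that was written to /tmp/tmp.FGv67Nbo8f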
00:23:00.995 [2024-07-14 02:10:06.644550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625879 ] 00:23:00.995 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.252 [2024-07-14 02:10:06.704544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.252 [2024-07-14 02:10:06.793705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.252 02:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.252 02:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:01.252 02:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FGv67Nbo8f 00:23:01.510 [2024-07-14 02:10:07.123895] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.510 [2024-07-14 02:10:07.124059] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:01.510 TLSTESTn1 00:23:01.770 02:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:01.770 Running I/O for 10 seconds... 00:23:11.752 00:23:11.752 Latency(us) 00:23:11.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.752 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:11.752 Verification LBA range: start 0x0 length 0x2000 00:23:11.752 TLSTESTn1 : 10.06 1600.22 6.25 0.00 0.00 79768.93 6213.78 145247.19 00:23:11.752 =================================================================================================================== 00:23:11.752 Total : 1600.22 6.25 0.00 0.00 79768.93 6213.78 145247.19 00:23:11.752 0 00:23:11.752 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.752 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1625879 00:23:11.752 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1625879 ']' 00:23:11.752 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1625879 00:23:11.752 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:11.752 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.752 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1625879 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1625879' 00:23:12.011 killing process with pid 1625879 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1625879 00:23:12.011 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.011 00:23:12.011 Latency(us) 00:23:12.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:12.011 =================================================================================================================== 00:23:12.011 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.011 [2024-07-14 02:10:17.461532] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1625879 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6pFl1sCf7A 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6pFl1sCf7A 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6pFl1sCf7A 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6pFl1sCf7A' 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1627188 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1627188 /var/tmp/bdevperf.sock 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1627188 ']' 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.011 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.271 [2024-07-14 02:10:17.727015] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:12.271 [2024-07-14 02:10:17.727092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627188 ] 00:23:12.271 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.271 [2024-07-14 02:10:17.787445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.271 [2024-07-14 02:10:17.877074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.530 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.530 02:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:12.530 02:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6pFl1sCf7A 00:23:12.788 [2024-07-14 02:10:18.264368] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.788 [2024-07-14 02:10:18.264512] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:12.788 [2024-07-14 02:10:18.270052] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:12.788 [2024-07-14 02:10:18.270319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a4bb0 (107): Transport endpoint is not connected 00:23:12.788 [2024-07-14 02:10:18.271307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a4bb0 (9): Bad file descriptor 00:23:12.788 [2024-07-14 02:10:18.272313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.788 [2024-07-14 02:10:18.272334] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:12.788 [2024-07-14 02:10:18.272365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
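Side note on the failure traced just above and dumped just below: the attach presents /tmp/tmp.6pFl1sCf7A, a key the target presumably does not accept for this subsystem/host pair, so the TCP connection is torn down during controller initialization and bdevperf only ever sees a generic I/O error. The negative case reduces to the following call against the running bdevperf instance (command copied from the trace, rpc.py path shortened; the expected outcome is the JSON-RPC error shown below):

./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.6pFl1sCf7A
# expected: JSON-RPC error -5, "Input/output error"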
00:23:12.788 request: 00:23:12.788 { 00:23:12.788 "name": "TLSTEST", 00:23:12.788 "trtype": "tcp", 00:23:12.788 "traddr": "10.0.0.2", 00:23:12.788 "adrfam": "ipv4", 00:23:12.788 "trsvcid": "4420", 00:23:12.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.788 "prchk_reftag": false, 00:23:12.788 "prchk_guard": false, 00:23:12.788 "hdgst": false, 00:23:12.788 "ddgst": false, 00:23:12.788 "psk": "/tmp/tmp.6pFl1sCf7A", 00:23:12.788 "method": "bdev_nvme_attach_controller", 00:23:12.788 "req_id": 1 00:23:12.788 } 00:23:12.788 Got JSON-RPC error response 00:23:12.788 response: 00:23:12.788 { 00:23:12.788 "code": -5, 00:23:12.788 "message": "Input/output error" 00:23:12.788 } 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1627188 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1627188 ']' 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1627188 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1627188 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1627188' 00:23:12.788 killing process with pid 1627188 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1627188 00:23:12.788 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.788 00:23:12.788 Latency(us) 00:23:12.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.788 =================================================================================================================== 00:23:12.788 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.788 [2024-07-14 02:10:18.324503] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:12.788 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1627188 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FGv67Nbo8f 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FGv67Nbo8f 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FGv67Nbo8f 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FGv67Nbo8f' 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1627213 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1627213 /var/tmp/bdevperf.sock 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1627213 ']' 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.048 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.048 [2024-07-14 02:10:18.581324] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:13.048 [2024-07-14 02:10:18.581405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627213 ] 00:23:13.048 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.048 [2024-07-14 02:10:18.640577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.048 [2024-07-14 02:10:18.722499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.307 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.307 02:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:13.307 02:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FGv67Nbo8f 00:23:13.565 [2024-07-14 02:10:19.057056] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.565 [2024-07-14 02:10:19.057200] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.565 [2024-07-14 02:10:19.062522] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:13.565 [2024-07-14 02:10:19.062553] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:13.565 [2024-07-14 02:10:19.062608] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:13.565 [2024-07-14 02:10:19.063118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69bb0 (107): Transport endpoint is not connected 00:23:13.565 [2024-07-14 02:10:19.064105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69bb0 (9): Bad file descriptor 00:23:13.565 [2024-07-14 02:10:19.065104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:13.565 [2024-07-14 02:10:19.065124] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:13.565 [2024-07-14 02:10:19.065157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
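Side note on the identity lookup error above: the target resolves the PSK from the NQN pair carried in the TLS identity ("NVMe0R01 <hostnqn> <subnqn>"), so a valid key presented under nqn.2016-06.io.spdk:host2 still fails, since presumably only host1 was registered for cnode1 when that target was set up earlier in the run. A sketch of the registration that binds key, host and subsystem on the target side; the verb and flags match the nvmf_subsystem_add_host call that appears later in this log, and the key path here is shown purely for illustration:

./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FGv67Nbo8f
# a connect that then identifies as host2 yields:
#   Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1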
00:23:13.565 request: 00:23:13.565 { 00:23:13.565 "name": "TLSTEST", 00:23:13.565 "trtype": "tcp", 00:23:13.565 "traddr": "10.0.0.2", 00:23:13.565 "adrfam": "ipv4", 00:23:13.565 "trsvcid": "4420", 00:23:13.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.565 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:13.565 "prchk_reftag": false, 00:23:13.565 "prchk_guard": false, 00:23:13.565 "hdgst": false, 00:23:13.565 "ddgst": false, 00:23:13.565 "psk": "/tmp/tmp.FGv67Nbo8f", 00:23:13.565 "method": "bdev_nvme_attach_controller", 00:23:13.565 "req_id": 1 00:23:13.565 } 00:23:13.565 Got JSON-RPC error response 00:23:13.565 response: 00:23:13.565 { 00:23:13.565 "code": -5, 00:23:13.565 "message": "Input/output error" 00:23:13.565 } 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1627213 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1627213 ']' 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1627213 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1627213 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1627213' 00:23:13.565 killing process with pid 1627213 00:23:13.565 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1627213 00:23:13.565 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.565 00:23:13.565 Latency(us) 00:23:13.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.565 =================================================================================================================== 00:23:13.566 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.566 [2024-07-14 02:10:19.109199] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:13.566 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1627213 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FGv67Nbo8f 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FGv67Nbo8f 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FGv67Nbo8f 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FGv67Nbo8f' 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1627342 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1627342 /var/tmp/bdevperf.sock 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1627342 ']' 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.824 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.824 [2024-07-14 02:10:19.342656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:13.824 [2024-07-14 02:10:19.342734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627342 ] 00:23:13.824 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.824 [2024-07-14 02:10:19.403433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.824 [2024-07-14 02:10:19.488488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.083 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.083 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:14.083 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FGv67Nbo8f 00:23:14.341 [2024-07-14 02:10:19.815242] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.341 [2024-07-14 02:10:19.815384] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:14.341 [2024-07-14 02:10:19.822350] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:14.341 [2024-07-14 02:10:19.822381] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:14.341 [2024-07-14 02:10:19.822435] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:14.341 [2024-07-14 02:10:19.823286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd70bb0 (107): Transport endpoint is not connected 00:23:14.341 [2024-07-14 02:10:19.824275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd70bb0 (9): Bad file descriptor 00:23:14.341 [2024-07-14 02:10:19.825274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:14.341 [2024-07-14 02:10:19.825297] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:14.341 [2024-07-14 02:10:19.825329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:14.341 request: 00:23:14.341 { 00:23:14.341 "name": "TLSTEST", 00:23:14.341 "trtype": "tcp", 00:23:14.341 "traddr": "10.0.0.2", 00:23:14.341 "adrfam": "ipv4", 00:23:14.341 "trsvcid": "4420", 00:23:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:14.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.341 "prchk_reftag": false, 00:23:14.341 "prchk_guard": false, 00:23:14.341 "hdgst": false, 00:23:14.341 "ddgst": false, 00:23:14.341 "psk": "/tmp/tmp.FGv67Nbo8f", 00:23:14.341 "method": "bdev_nvme_attach_controller", 00:23:14.341 "req_id": 1 00:23:14.341 } 00:23:14.341 Got JSON-RPC error response 00:23:14.341 response: 00:23:14.341 { 00:23:14.342 "code": -5, 00:23:14.342 "message": "Input/output error" 00:23:14.342 } 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1627342 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1627342 ']' 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1627342 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1627342 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1627342' 00:23:14.342 killing process with pid 1627342 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1627342 00:23:14.342 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.342 00:23:14.342 Latency(us) 00:23:14.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.342 =================================================================================================================== 00:23:14.342 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.342 [2024-07-14 02:10:19.872779] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:14.342 02:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1627342 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1627478 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1627478 /var/tmp/bdevperf.sock 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1627478 ']' 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.600 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.601 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.601 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.601 [2024-07-14 02:10:20.132749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:14.601 [2024-07-14 02:10:20.132827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627478 ] 00:23:14.601 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.601 [2024-07-14 02:10:20.193806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.601 [2024-07-14 02:10:20.275623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.859 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.859 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:14.859 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:15.117 [2024-07-14 02:10:20.614521] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:15.117 [2024-07-14 02:10:20.615954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x560160 (9): Bad file descriptor 00:23:15.117 [2024-07-14 02:10:20.616950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:15.117 [2024-07-14 02:10:20.616971] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:15.117 [2024-07-14 02:10:20.617003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
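Side note on the case above: no --psk is passed at all, so the initiator never builds TLS credentials; the plain TCP connection is presumably dropped by the TLS-only listener during setup (hence the "Bad file descriptor" above) and the RPC fails with the same -5 as the earlier negative cases. Stripped of the harness wrappers, the attach is simply:

./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# note: no --psk argument; expected result is JSON-RPC error -5, "Input/output error"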
00:23:15.117 request: 00:23:15.117 { 00:23:15.117 "name": "TLSTEST", 00:23:15.117 "trtype": "tcp", 00:23:15.117 "traddr": "10.0.0.2", 00:23:15.117 "adrfam": "ipv4", 00:23:15.117 "trsvcid": "4420", 00:23:15.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.117 "prchk_reftag": false, 00:23:15.117 "prchk_guard": false, 00:23:15.117 "hdgst": false, 00:23:15.117 "ddgst": false, 00:23:15.117 "method": "bdev_nvme_attach_controller", 00:23:15.117 "req_id": 1 00:23:15.117 } 00:23:15.117 Got JSON-RPC error response 00:23:15.117 response: 00:23:15.117 { 00:23:15.117 "code": -5, 00:23:15.117 "message": "Input/output error" 00:23:15.117 } 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1627478 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1627478 ']' 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1627478 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1627478 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1627478' 00:23:15.117 killing process with pid 1627478 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1627478 00:23:15.117 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.117 00:23:15.117 Latency(us) 00:23:15.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.117 =================================================================================================================== 00:23:15.117 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.117 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1627478 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1623989 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1623989 ']' 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1623989 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1623989 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1623989' 00:23:15.375 
killing process with pid 1623989 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1623989 00:23:15.375 [2024-07-14 02:10:20.878336] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:15.375 02:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1623989 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.392SeEuL9o 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.392SeEuL9o 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1627627 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1627627 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1627627 ']' 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.633 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.633 [2024-07-14 02:10:21.199131] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
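Side note on the key material prepared above: format_interchange_psk wraps the raw keying material into the NVMe/TCP PSK interchange form "NVMeTLSkey-1:<hh>:<base64 payload>:", where the hash indicator (02 here, requested via the trailing "2") presumably selects the digest used for PSK derivation and the payload is the key bytes plus a 4-byte integrity value. A standalone sketch of that assembly, driven from the shell the same way the helper drives its own inline python; treating the trailing four bytes as a little-endian CRC-32 of the key is an assumption of this sketch, not something the log itself confirms:

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"   # the 48-byte test key used above
crc = zlib.crc32(key).to_bytes(4, "little")                  # assumed: CRC-32 of the key, little-endian
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF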
00:23:15.633 [2024-07-14 02:10:21.199216] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.633 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.633 [2024-07-14 02:10:21.267630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.891 [2024-07-14 02:10:21.362569] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.891 [2024-07-14 02:10:21.362632] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.891 [2024-07-14 02:10:21.362649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.891 [2024-07-14 02:10:21.362671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.891 [2024-07-14 02:10:21.362684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.891 [2024-07-14 02:10:21.362716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.891 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.891 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:15.891 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.891 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.891 02:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.891 02:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.891 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.392SeEuL9o 00:23:15.891 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.392SeEuL9o 00:23:15.891 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:16.149 [2024-07-14 02:10:21.723451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.149 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:16.405 02:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:16.663 [2024-07-14 02:10:22.236825] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.663 [2024-07-14 02:10:22.237051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.663 02:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:16.922 malloc0 00:23:16.922 02:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:17.187 02:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.392SeEuL9o 00:23:17.484 [2024-07-14 02:10:22.981974] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.392SeEuL9o 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.392SeEuL9o' 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1627793 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1627793 /var/tmp/bdevperf.sock 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1627793 ']' 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.484 02:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.484 [2024-07-14 02:10:23.047293] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:17.484 [2024-07-14 02:10:23.047376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627793 ] 00:23:17.484 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.484 [2024-07-14 02:10:23.110128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.751 [2024-07-14 02:10:23.201828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.751 02:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.751 02:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:17.751 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.392SeEuL9o 00:23:18.008 [2024-07-14 02:10:23.580082] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.008 [2024-07-14 02:10:23.580219] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:18.008 TLSTESTn1 00:23:18.008 02:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:18.265 Running I/O for 10 seconds... 00:23:28.264 00:23:28.264 Latency(us) 00:23:28.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.264 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.264 Verification LBA range: start 0x0 length 0x2000 00:23:28.264 TLSTESTn1 : 10.06 1908.64 7.46 0.00 0.00 66867.81 6650.69 96702.01 00:23:28.264 =================================================================================================================== 00:23:28.264 Total : 1908.64 7.46 0.00 0.00 66867.81 6650.69 96702.01 00:23:28.264 0 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1627793 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1627793 ']' 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1627793 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1627793 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1627793' 00:23:28.264 killing process with pid 1627793 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1627793 00:23:28.264 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.264 00:23:28.264 Latency(us) 00:23:28.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:28.264 =================================================================================================================== 00:23:28.264 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.264 [2024-07-14 02:10:33.910103] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:28.264 02:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1627793 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.392SeEuL9o 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.392SeEuL9o 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.392SeEuL9o 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.392SeEuL9o 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.392SeEuL9o' 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1629109 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1629109 /var/tmp/bdevperf.sock 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1629109 ']' 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.522 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.522 [2024-07-14 02:10:34.160307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:28.522 [2024-07-14 02:10:34.160387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1629109 ] 00:23:28.522 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.780 [2024-07-14 02:10:34.223999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.780 [2024-07-14 02:10:34.313971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.780 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.780 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:28.780 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.392SeEuL9o 00:23:29.038 [2024-07-14 02:10:34.684761] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.038 [2024-07-14 02:10:34.684847] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:29.038 [2024-07-14 02:10:34.684903] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.392SeEuL9o 00:23:29.038 request: 00:23:29.038 { 00:23:29.038 "name": "TLSTEST", 00:23:29.038 "trtype": "tcp", 00:23:29.038 "traddr": "10.0.0.2", 00:23:29.038 "adrfam": "ipv4", 00:23:29.038 "trsvcid": "4420", 00:23:29.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.038 "prchk_reftag": false, 00:23:29.038 "prchk_guard": false, 00:23:29.038 "hdgst": false, 00:23:29.038 "ddgst": false, 00:23:29.038 "psk": "/tmp/tmp.392SeEuL9o", 00:23:29.038 "method": "bdev_nvme_attach_controller", 00:23:29.038 "req_id": 1 00:23:29.038 } 00:23:29.038 Got JSON-RPC error response 00:23:29.038 response: 00:23:29.038 { 00:23:29.038 "code": -1, 00:23:29.038 "message": "Operation not permitted" 00:23:29.038 } 00:23:29.038 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1629109 00:23:29.038 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1629109 ']' 00:23:29.038 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1629109 00:23:29.038 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.038 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.038 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629109 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629109' 00:23:29.296 killing process with pid 1629109 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1629109 00:23:29.296 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.296 00:23:29.296 Latency(us) 00:23:29.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.296 
=================================================================================================================== 00:23:29.296 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1629109 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1627627 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1627627 ']' 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1627627 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1627627 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1627627' 00:23:29.296 killing process with pid 1627627 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1627627 00:23:29.296 [2024-07-14 02:10:34.983683] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:29.296 02:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1627627 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1629254 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1629254 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1629254 ']' 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
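Side note on the permission failures in this stretch of the run: the key file was deliberately loosened to mode 0666 a few lines above, and SPDK apparently refuses to load a PSK file that is accessible beyond its owner, so the initiator-side attach fails with "Incorrect permissions for PSK file" (JSON-RPC -1, "Operation not permitted") and, a little further below, the target-side nvmf_subsystem_add_host fails with "Could not retrieve PSK from file" (-32603) until the mode is restored. The two commands the script itself uses to flip the behaviour:

chmod 0666 /tmp/tmp.392SeEuL9o   # loading the key now fails: "Incorrect permissions for PSK file"
chmod 0600 /tmp/tmp.392SeEuL9o   # owner-only access restored; the key loads normally again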
00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.554 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.813 [2024-07-14 02:10:35.278891] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:29.813 [2024-07-14 02:10:35.278969] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.813 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.813 [2024-07-14 02:10:35.342094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.813 [2024-07-14 02:10:35.422391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.813 [2024-07-14 02:10:35.422456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.813 [2024-07-14 02:10:35.422485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.813 [2024-07-14 02:10:35.422497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.813 [2024-07-14 02:10:35.422506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.813 [2024-07-14 02:10:35.422532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.392SeEuL9o 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.392SeEuL9o 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.392SeEuL9o 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.392SeEuL9o 00:23:30.071 02:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:30.329 [2024-07-14 02:10:35.821346] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.329 02:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:30.586 
02:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:30.843 [2024-07-14 02:10:36.294530] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.843 [2024-07-14 02:10:36.294770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.843 02:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:31.101 malloc0 00:23:31.101 02:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:31.360 02:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.392SeEuL9o 00:23:31.618 [2024-07-14 02:10:37.128973] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:31.618 [2024-07-14 02:10:37.129019] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:31.618 [2024-07-14 02:10:37.129058] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:31.618 request: 00:23:31.618 { 00:23:31.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.618 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.618 "psk": "/tmp/tmp.392SeEuL9o", 00:23:31.618 "method": "nvmf_subsystem_add_host", 00:23:31.618 "req_id": 1 00:23:31.618 } 00:23:31.618 Got JSON-RPC error response 00:23:31.618 response: 00:23:31.618 { 00:23:31.618 "code": -32603, 00:23:31.618 "message": "Internal error" 00:23:31.618 } 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1629254 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1629254 ']' 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1629254 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629254 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629254' 00:23:31.618 killing process with pid 1629254 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1629254 00:23:31.618 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1629254 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.392SeEuL9o 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:31.877 
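The JSON-RPC "Internal error" above is the intended negative case for target/tls.sh@177: nvmf_subsystem_add_host refuses a PSK file whose mode is too permissive ("Incorrect permissions for PSK file"), so the script tightens the mode and starts a fresh target before retrying. A hedged sketch of the fix-and-retry, assuming /tmp/tmp.392SeEuL9o already holds a valid TLS PSK generated earlier in tls.sh (its contents are not shown in this log); note that the --psk file-path form is the deprecated "PSK path" mechanism the warnings keep flagging:

    chmod 0600 /tmp/tmp.392SeEuL9o      # owner-only; group/world-readable key files are rejected
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.392SeEuL9o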
02:10:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1629546 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1629546 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1629546 ']' 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.877 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.877 [2024-07-14 02:10:37.457528] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:31.877 [2024-07-14 02:10:37.457626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.877 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.877 [2024-07-14 02:10:37.521572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.136 [2024-07-14 02:10:37.606551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.136 [2024-07-14 02:10:37.606605] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.136 [2024-07-14 02:10:37.606633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.136 [2024-07-14 02:10:37.606644] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.136 [2024-07-14 02:10:37.606653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
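Since the target runs with -e 0xFFFF, every nvmf tracepoint group is enabled, and the notices above spell out how to get at the data. A short sketch of the two options they mention, assuming the spdk_trace tool from the same build tree (the shm file name is quoted from the notice):

    ./build/bin/spdk_trace -s nvmf -i 0     # snapshot events from the running target (app name nvmf, shm id 0)
    cp /dev/shm/nvmf_trace.0 /tmp/          # or keep the raw shm file for offline analysis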
00:23:32.136 [2024-07-14 02:10:37.606678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.136 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.136 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:32.136 02:10:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.136 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.136 02:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.136 02:10:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.136 02:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.392SeEuL9o 00:23:32.136 02:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.392SeEuL9o 00:23:32.136 02:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:32.394 [2024-07-14 02:10:37.961072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.394 02:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:32.652 02:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:32.910 [2024-07-14 02:10:38.442376] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.910 [2024-07-14 02:10:38.442595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.910 02:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:33.169 malloc0 00:23:33.169 02:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.427 02:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.392SeEuL9o 00:23:33.685 [2024-07-14 02:10:39.188085] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1629826 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1629826 /var/tmp/bdevperf.sock 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1629826 ']' 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.685 02:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.685 [2024-07-14 02:10:39.251796] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:33.685 [2024-07-14 02:10:39.251907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1629826 ] 00:23:33.685 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.685 [2024-07-14 02:10:39.312687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.943 [2024-07-14 02:10:39.407638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.943 02:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.944 02:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:33.944 02:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.392SeEuL9o 00:23:34.202 [2024-07-14 02:10:39.793543] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.202 [2024-07-14 02:10:39.793684] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:34.202 TLSTESTn1 00:23:34.202 02:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:34.768 02:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:34.768 "subsystems": [ 00:23:34.768 { 00:23:34.768 "subsystem": "keyring", 00:23:34.768 "config": [] 00:23:34.768 }, 00:23:34.768 { 00:23:34.768 "subsystem": "iobuf", 00:23:34.768 "config": [ 00:23:34.768 { 00:23:34.768 "method": "iobuf_set_options", 00:23:34.768 "params": { 00:23:34.768 "small_pool_count": 8192, 00:23:34.768 "large_pool_count": 1024, 00:23:34.768 "small_bufsize": 8192, 00:23:34.768 "large_bufsize": 135168 00:23:34.768 } 00:23:34.768 } 00:23:34.768 ] 00:23:34.768 }, 00:23:34.768 { 00:23:34.769 "subsystem": "sock", 00:23:34.769 "config": [ 00:23:34.769 { 00:23:34.769 "method": "sock_set_default_impl", 00:23:34.769 "params": { 00:23:34.769 "impl_name": "posix" 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "sock_impl_set_options", 00:23:34.769 "params": { 00:23:34.769 "impl_name": "ssl", 00:23:34.769 "recv_buf_size": 4096, 00:23:34.769 "send_buf_size": 4096, 00:23:34.769 "enable_recv_pipe": true, 00:23:34.769 "enable_quickack": false, 00:23:34.769 "enable_placement_id": 0, 00:23:34.769 "enable_zerocopy_send_server": true, 00:23:34.769 "enable_zerocopy_send_client": false, 00:23:34.769 "zerocopy_threshold": 0, 00:23:34.769 "tls_version": 0, 00:23:34.769 "enable_ktls": false 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "sock_impl_set_options", 00:23:34.769 "params": { 00:23:34.769 "impl_name": "posix", 00:23:34.769 "recv_buf_size": 2097152, 00:23:34.769 
"send_buf_size": 2097152, 00:23:34.769 "enable_recv_pipe": true, 00:23:34.769 "enable_quickack": false, 00:23:34.769 "enable_placement_id": 0, 00:23:34.769 "enable_zerocopy_send_server": true, 00:23:34.769 "enable_zerocopy_send_client": false, 00:23:34.769 "zerocopy_threshold": 0, 00:23:34.769 "tls_version": 0, 00:23:34.769 "enable_ktls": false 00:23:34.769 } 00:23:34.769 } 00:23:34.769 ] 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "subsystem": "vmd", 00:23:34.769 "config": [] 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "subsystem": "accel", 00:23:34.769 "config": [ 00:23:34.769 { 00:23:34.769 "method": "accel_set_options", 00:23:34.769 "params": { 00:23:34.769 "small_cache_size": 128, 00:23:34.769 "large_cache_size": 16, 00:23:34.769 "task_count": 2048, 00:23:34.769 "sequence_count": 2048, 00:23:34.769 "buf_count": 2048 00:23:34.769 } 00:23:34.769 } 00:23:34.769 ] 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "subsystem": "bdev", 00:23:34.769 "config": [ 00:23:34.769 { 00:23:34.769 "method": "bdev_set_options", 00:23:34.769 "params": { 00:23:34.769 "bdev_io_pool_size": 65535, 00:23:34.769 "bdev_io_cache_size": 256, 00:23:34.769 "bdev_auto_examine": true, 00:23:34.769 "iobuf_small_cache_size": 128, 00:23:34.769 "iobuf_large_cache_size": 16 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "bdev_raid_set_options", 00:23:34.769 "params": { 00:23:34.769 "process_window_size_kb": 1024 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "bdev_iscsi_set_options", 00:23:34.769 "params": { 00:23:34.769 "timeout_sec": 30 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "bdev_nvme_set_options", 00:23:34.769 "params": { 00:23:34.769 "action_on_timeout": "none", 00:23:34.769 "timeout_us": 0, 00:23:34.769 "timeout_admin_us": 0, 00:23:34.769 "keep_alive_timeout_ms": 10000, 00:23:34.769 "arbitration_burst": 0, 00:23:34.769 "low_priority_weight": 0, 00:23:34.769 "medium_priority_weight": 0, 00:23:34.769 "high_priority_weight": 0, 00:23:34.769 "nvme_adminq_poll_period_us": 10000, 00:23:34.769 "nvme_ioq_poll_period_us": 0, 00:23:34.769 "io_queue_requests": 0, 00:23:34.769 "delay_cmd_submit": true, 00:23:34.769 "transport_retry_count": 4, 00:23:34.769 "bdev_retry_count": 3, 00:23:34.769 "transport_ack_timeout": 0, 00:23:34.769 "ctrlr_loss_timeout_sec": 0, 00:23:34.769 "reconnect_delay_sec": 0, 00:23:34.769 "fast_io_fail_timeout_sec": 0, 00:23:34.769 "disable_auto_failback": false, 00:23:34.769 "generate_uuids": false, 00:23:34.769 "transport_tos": 0, 00:23:34.769 "nvme_error_stat": false, 00:23:34.769 "rdma_srq_size": 0, 00:23:34.769 "io_path_stat": false, 00:23:34.769 "allow_accel_sequence": false, 00:23:34.769 "rdma_max_cq_size": 0, 00:23:34.769 "rdma_cm_event_timeout_ms": 0, 00:23:34.769 "dhchap_digests": [ 00:23:34.769 "sha256", 00:23:34.769 "sha384", 00:23:34.769 "sha512" 00:23:34.769 ], 00:23:34.769 "dhchap_dhgroups": [ 00:23:34.769 "null", 00:23:34.769 "ffdhe2048", 00:23:34.769 "ffdhe3072", 00:23:34.769 "ffdhe4096", 00:23:34.769 "ffdhe6144", 00:23:34.769 "ffdhe8192" 00:23:34.769 ] 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "bdev_nvme_set_hotplug", 00:23:34.769 "params": { 00:23:34.769 "period_us": 100000, 00:23:34.769 "enable": false 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "bdev_malloc_create", 00:23:34.769 "params": { 00:23:34.769 "name": "malloc0", 00:23:34.769 "num_blocks": 8192, 00:23:34.769 "block_size": 4096, 00:23:34.769 "physical_block_size": 4096, 00:23:34.769 "uuid": 
"68f3232f-a584-4f51-a378-ca18d2dffec8", 00:23:34.769 "optimal_io_boundary": 0 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "bdev_wait_for_examine" 00:23:34.769 } 00:23:34.769 ] 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "subsystem": "nbd", 00:23:34.769 "config": [] 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "subsystem": "scheduler", 00:23:34.769 "config": [ 00:23:34.769 { 00:23:34.769 "method": "framework_set_scheduler", 00:23:34.769 "params": { 00:23:34.769 "name": "static" 00:23:34.769 } 00:23:34.769 } 00:23:34.769 ] 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "subsystem": "nvmf", 00:23:34.769 "config": [ 00:23:34.769 { 00:23:34.769 "method": "nvmf_set_config", 00:23:34.769 "params": { 00:23:34.769 "discovery_filter": "match_any", 00:23:34.769 "admin_cmd_passthru": { 00:23:34.769 "identify_ctrlr": false 00:23:34.769 } 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "nvmf_set_max_subsystems", 00:23:34.769 "params": { 00:23:34.769 "max_subsystems": 1024 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "nvmf_set_crdt", 00:23:34.769 "params": { 00:23:34.769 "crdt1": 0, 00:23:34.769 "crdt2": 0, 00:23:34.769 "crdt3": 0 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "nvmf_create_transport", 00:23:34.769 "params": { 00:23:34.769 "trtype": "TCP", 00:23:34.769 "max_queue_depth": 128, 00:23:34.769 "max_io_qpairs_per_ctrlr": 127, 00:23:34.769 "in_capsule_data_size": 4096, 00:23:34.769 "max_io_size": 131072, 00:23:34.769 "io_unit_size": 131072, 00:23:34.769 "max_aq_depth": 128, 00:23:34.769 "num_shared_buffers": 511, 00:23:34.769 "buf_cache_size": 4294967295, 00:23:34.769 "dif_insert_or_strip": false, 00:23:34.769 "zcopy": false, 00:23:34.769 "c2h_success": false, 00:23:34.769 "sock_priority": 0, 00:23:34.769 "abort_timeout_sec": 1, 00:23:34.769 "ack_timeout": 0, 00:23:34.769 "data_wr_pool_size": 0 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "nvmf_create_subsystem", 00:23:34.769 "params": { 00:23:34.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.769 "allow_any_host": false, 00:23:34.769 "serial_number": "SPDK00000000000001", 00:23:34.769 "model_number": "SPDK bdev Controller", 00:23:34.769 "max_namespaces": 10, 00:23:34.769 "min_cntlid": 1, 00:23:34.769 "max_cntlid": 65519, 00:23:34.769 "ana_reporting": false 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "nvmf_subsystem_add_host", 00:23:34.769 "params": { 00:23:34.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.769 "host": "nqn.2016-06.io.spdk:host1", 00:23:34.769 "psk": "/tmp/tmp.392SeEuL9o" 00:23:34.769 } 00:23:34.769 }, 00:23:34.769 { 00:23:34.769 "method": "nvmf_subsystem_add_ns", 00:23:34.769 "params": { 00:23:34.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.769 "namespace": { 00:23:34.769 "nsid": 1, 00:23:34.769 "bdev_name": "malloc0", 00:23:34.769 "nguid": "68F3232FA5844F51A378CA18D2DFFEC8", 00:23:34.769 "uuid": "68f3232f-a584-4f51-a378-ca18d2dffec8", 00:23:34.769 "no_auto_visible": false 00:23:34.769 } 00:23:34.769 } 00:23:34.769 }, 00:23:34.770 { 00:23:34.770 "method": "nvmf_subsystem_add_listener", 00:23:34.770 "params": { 00:23:34.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.770 "listen_address": { 00:23:34.770 "trtype": "TCP", 00:23:34.770 "adrfam": "IPv4", 00:23:34.770 "traddr": "10.0.0.2", 00:23:34.770 "trsvcid": "4420" 00:23:34.770 }, 00:23:34.770 "secure_channel": true 00:23:34.770 } 00:23:34.770 } 00:23:34.770 ] 00:23:34.770 } 00:23:34.770 ] 00:23:34.770 }' 00:23:34.770 02:10:40 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:35.029 02:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:35.029 "subsystems": [ 00:23:35.029 { 00:23:35.029 "subsystem": "keyring", 00:23:35.029 "config": [] 00:23:35.029 }, 00:23:35.029 { 00:23:35.029 "subsystem": "iobuf", 00:23:35.029 "config": [ 00:23:35.029 { 00:23:35.029 "method": "iobuf_set_options", 00:23:35.029 "params": { 00:23:35.029 "small_pool_count": 8192, 00:23:35.029 "large_pool_count": 1024, 00:23:35.029 "small_bufsize": 8192, 00:23:35.029 "large_bufsize": 135168 00:23:35.029 } 00:23:35.029 } 00:23:35.029 ] 00:23:35.029 }, 00:23:35.029 { 00:23:35.029 "subsystem": "sock", 00:23:35.029 "config": [ 00:23:35.029 { 00:23:35.029 "method": "sock_set_default_impl", 00:23:35.029 "params": { 00:23:35.029 "impl_name": "posix" 00:23:35.029 } 00:23:35.029 }, 00:23:35.029 { 00:23:35.029 "method": "sock_impl_set_options", 00:23:35.029 "params": { 00:23:35.029 "impl_name": "ssl", 00:23:35.029 "recv_buf_size": 4096, 00:23:35.029 "send_buf_size": 4096, 00:23:35.029 "enable_recv_pipe": true, 00:23:35.029 "enable_quickack": false, 00:23:35.029 "enable_placement_id": 0, 00:23:35.029 "enable_zerocopy_send_server": true, 00:23:35.029 "enable_zerocopy_send_client": false, 00:23:35.029 "zerocopy_threshold": 0, 00:23:35.029 "tls_version": 0, 00:23:35.029 "enable_ktls": false 00:23:35.029 } 00:23:35.029 }, 00:23:35.029 { 00:23:35.029 "method": "sock_impl_set_options", 00:23:35.029 "params": { 00:23:35.029 "impl_name": "posix", 00:23:35.029 "recv_buf_size": 2097152, 00:23:35.029 "send_buf_size": 2097152, 00:23:35.029 "enable_recv_pipe": true, 00:23:35.029 "enable_quickack": false, 00:23:35.029 "enable_placement_id": 0, 00:23:35.029 "enable_zerocopy_send_server": true, 00:23:35.029 "enable_zerocopy_send_client": false, 00:23:35.029 "zerocopy_threshold": 0, 00:23:35.029 "tls_version": 0, 00:23:35.029 "enable_ktls": false 00:23:35.029 } 00:23:35.029 } 00:23:35.029 ] 00:23:35.029 }, 00:23:35.029 { 00:23:35.029 "subsystem": "vmd", 00:23:35.029 "config": [] 00:23:35.029 }, 00:23:35.029 { 00:23:35.029 "subsystem": "accel", 00:23:35.029 "config": [ 00:23:35.029 { 00:23:35.029 "method": "accel_set_options", 00:23:35.029 "params": { 00:23:35.029 "small_cache_size": 128, 00:23:35.029 "large_cache_size": 16, 00:23:35.029 "task_count": 2048, 00:23:35.029 "sequence_count": 2048, 00:23:35.029 "buf_count": 2048 00:23:35.029 } 00:23:35.029 } 00:23:35.029 ] 00:23:35.029 }, 00:23:35.029 { 00:23:35.029 "subsystem": "bdev", 00:23:35.029 "config": [ 00:23:35.029 { 00:23:35.029 "method": "bdev_set_options", 00:23:35.029 "params": { 00:23:35.029 "bdev_io_pool_size": 65535, 00:23:35.029 "bdev_io_cache_size": 256, 00:23:35.029 "bdev_auto_examine": true, 00:23:35.029 "iobuf_small_cache_size": 128, 00:23:35.029 "iobuf_large_cache_size": 16 00:23:35.029 } 00:23:35.029 }, 00:23:35.029 { 00:23:35.030 "method": "bdev_raid_set_options", 00:23:35.030 "params": { 00:23:35.030 "process_window_size_kb": 1024 00:23:35.030 } 00:23:35.030 }, 00:23:35.030 { 00:23:35.030 "method": "bdev_iscsi_set_options", 00:23:35.030 "params": { 00:23:35.030 "timeout_sec": 30 00:23:35.030 } 00:23:35.030 }, 00:23:35.030 { 00:23:35.030 "method": "bdev_nvme_set_options", 00:23:35.030 "params": { 00:23:35.030 "action_on_timeout": "none", 00:23:35.030 "timeout_us": 0, 00:23:35.030 "timeout_admin_us": 0, 00:23:35.030 "keep_alive_timeout_ms": 10000, 00:23:35.030 "arbitration_burst": 0, 
00:23:35.030 "low_priority_weight": 0, 00:23:35.030 "medium_priority_weight": 0, 00:23:35.030 "high_priority_weight": 0, 00:23:35.030 "nvme_adminq_poll_period_us": 10000, 00:23:35.030 "nvme_ioq_poll_period_us": 0, 00:23:35.030 "io_queue_requests": 512, 00:23:35.030 "delay_cmd_submit": true, 00:23:35.030 "transport_retry_count": 4, 00:23:35.030 "bdev_retry_count": 3, 00:23:35.030 "transport_ack_timeout": 0, 00:23:35.030 "ctrlr_loss_timeout_sec": 0, 00:23:35.030 "reconnect_delay_sec": 0, 00:23:35.030 "fast_io_fail_timeout_sec": 0, 00:23:35.030 "disable_auto_failback": false, 00:23:35.030 "generate_uuids": false, 00:23:35.030 "transport_tos": 0, 00:23:35.030 "nvme_error_stat": false, 00:23:35.030 "rdma_srq_size": 0, 00:23:35.030 "io_path_stat": false, 00:23:35.030 "allow_accel_sequence": false, 00:23:35.030 "rdma_max_cq_size": 0, 00:23:35.030 "rdma_cm_event_timeout_ms": 0, 00:23:35.030 "dhchap_digests": [ 00:23:35.030 "sha256", 00:23:35.030 "sha384", 00:23:35.030 "sha512" 00:23:35.030 ], 00:23:35.030 "dhchap_dhgroups": [ 00:23:35.030 "null", 00:23:35.030 "ffdhe2048", 00:23:35.030 "ffdhe3072", 00:23:35.030 "ffdhe4096", 00:23:35.030 "ffdhe6144", 00:23:35.030 "ffdhe8192" 00:23:35.030 ] 00:23:35.030 } 00:23:35.030 }, 00:23:35.030 { 00:23:35.030 "method": "bdev_nvme_attach_controller", 00:23:35.030 "params": { 00:23:35.030 "name": "TLSTEST", 00:23:35.030 "trtype": "TCP", 00:23:35.030 "adrfam": "IPv4", 00:23:35.030 "traddr": "10.0.0.2", 00:23:35.030 "trsvcid": "4420", 00:23:35.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.030 "prchk_reftag": false, 00:23:35.030 "prchk_guard": false, 00:23:35.030 "ctrlr_loss_timeout_sec": 0, 00:23:35.030 "reconnect_delay_sec": 0, 00:23:35.030 "fast_io_fail_timeout_sec": 0, 00:23:35.030 "psk": "/tmp/tmp.392SeEuL9o", 00:23:35.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.030 "hdgst": false, 00:23:35.030 "ddgst": false 00:23:35.030 } 00:23:35.030 }, 00:23:35.030 { 00:23:35.030 "method": "bdev_nvme_set_hotplug", 00:23:35.030 "params": { 00:23:35.030 "period_us": 100000, 00:23:35.030 "enable": false 00:23:35.030 } 00:23:35.030 }, 00:23:35.030 { 00:23:35.030 "method": "bdev_wait_for_examine" 00:23:35.030 } 00:23:35.030 ] 00:23:35.030 }, 00:23:35.030 { 00:23:35.030 "subsystem": "nbd", 00:23:35.030 "config": [] 00:23:35.030 } 00:23:35.030 ] 00:23:35.030 }' 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1629826 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1629826 ']' 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1629826 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629826 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629826' 00:23:35.030 killing process with pid 1629826 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1629826 00:23:35.030 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.030 00:23:35.030 Latency(us) 00:23:35.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:35.030 =================================================================================================================== 00:23:35.030 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.030 [2024-07-14 02:10:40.591303] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:35.030 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1629826 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1629546 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1629546 ']' 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1629546 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629546 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629546' 00:23:35.289 killing process with pid 1629546 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1629546 00:23:35.289 [2024-07-14 02:10:40.847469] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:35.289 02:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1629546 00:23:35.548 02:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:35.549 02:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.549 02:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:35.549 "subsystems": [ 00:23:35.549 { 00:23:35.549 "subsystem": "keyring", 00:23:35.549 "config": [] 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "subsystem": "iobuf", 00:23:35.549 "config": [ 00:23:35.549 { 00:23:35.549 "method": "iobuf_set_options", 00:23:35.549 "params": { 00:23:35.549 "small_pool_count": 8192, 00:23:35.549 "large_pool_count": 1024, 00:23:35.549 "small_bufsize": 8192, 00:23:35.549 "large_bufsize": 135168 00:23:35.549 } 00:23:35.549 } 00:23:35.549 ] 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "subsystem": "sock", 00:23:35.549 "config": [ 00:23:35.549 { 00:23:35.549 "method": "sock_set_default_impl", 00:23:35.549 "params": { 00:23:35.549 "impl_name": "posix" 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "sock_impl_set_options", 00:23:35.549 "params": { 00:23:35.549 "impl_name": "ssl", 00:23:35.549 "recv_buf_size": 4096, 00:23:35.549 "send_buf_size": 4096, 00:23:35.549 "enable_recv_pipe": true, 00:23:35.549 "enable_quickack": false, 00:23:35.549 "enable_placement_id": 0, 00:23:35.549 "enable_zerocopy_send_server": true, 00:23:35.549 "enable_zerocopy_send_client": false, 00:23:35.549 "zerocopy_threshold": 0, 00:23:35.549 "tls_version": 0, 00:23:35.549 "enable_ktls": false 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "sock_impl_set_options", 00:23:35.549 "params": { 00:23:35.549 "impl_name": "posix", 00:23:35.549 "recv_buf_size": 2097152, 00:23:35.549 "send_buf_size": 2097152, 00:23:35.549 "enable_recv_pipe": true, 
00:23:35.549 "enable_quickack": false, 00:23:35.549 "enable_placement_id": 0, 00:23:35.549 "enable_zerocopy_send_server": true, 00:23:35.549 "enable_zerocopy_send_client": false, 00:23:35.549 "zerocopy_threshold": 0, 00:23:35.549 "tls_version": 0, 00:23:35.549 "enable_ktls": false 00:23:35.549 } 00:23:35.549 } 00:23:35.549 ] 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "subsystem": "vmd", 00:23:35.549 "config": [] 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "subsystem": "accel", 00:23:35.549 "config": [ 00:23:35.549 { 00:23:35.549 "method": "accel_set_options", 00:23:35.549 "params": { 00:23:35.549 "small_cache_size": 128, 00:23:35.549 "large_cache_size": 16, 00:23:35.549 "task_count": 2048, 00:23:35.549 "sequence_count": 2048, 00:23:35.549 "buf_count": 2048 00:23:35.549 } 00:23:35.549 } 00:23:35.549 ] 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "subsystem": "bdev", 00:23:35.549 "config": [ 00:23:35.549 { 00:23:35.549 "method": "bdev_set_options", 00:23:35.549 "params": { 00:23:35.549 "bdev_io_pool_size": 65535, 00:23:35.549 "bdev_io_cache_size": 256, 00:23:35.549 "bdev_auto_examine": true, 00:23:35.549 "iobuf_small_cache_size": 128, 00:23:35.549 "iobuf_large_cache_size": 16 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "bdev_raid_set_options", 00:23:35.549 "params": { 00:23:35.549 "process_window_size_kb": 1024 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "bdev_iscsi_set_options", 00:23:35.549 "params": { 00:23:35.549 "timeout_sec": 30 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "bdev_nvme_set_options", 00:23:35.549 "params": { 00:23:35.549 "action_on_timeout": "none", 00:23:35.549 "timeout_us": 0, 00:23:35.549 "timeout_admin_us": 0, 00:23:35.549 "keep_alive_timeout_ms": 10000, 00:23:35.549 "arbitration_burst": 0, 00:23:35.549 "low_priority_weight": 0, 00:23:35.549 "medium_priority_weight": 0, 00:23:35.549 "high_priority_weight": 0, 00:23:35.549 "nvme_adminq_poll_period_us": 10000, 00:23:35.549 "nvme_ioq_poll_period_us": 0, 00:23:35.549 "io_queue_requests": 0, 00:23:35.549 "delay_cmd_submit": true, 00:23:35.549 "transport_retry_count": 4, 00:23:35.549 "bdev_retry_count": 3, 00:23:35.549 "transport_ack_timeout": 0, 00:23:35.549 "ctrlr_loss_timeout_sec": 0, 00:23:35.549 "reconnect_delay_sec": 0, 00:23:35.549 "fast_io_fail_timeout_sec": 0, 00:23:35.549 "disable_auto_failback": false, 00:23:35.549 "generate_uuids": false, 00:23:35.549 "transport_tos": 0, 00:23:35.549 "nvme_error_stat": false, 00:23:35.549 "rdma_srq_size": 0, 00:23:35.549 "io_path_stat": false, 00:23:35.549 "allow_accel_sequence": false, 00:23:35.549 "rdma_max_cq_size": 0, 00:23:35.549 "rdma_cm_event_timeout_ms": 0, 00:23:35.549 "dhchap_digests": [ 00:23:35.549 "sha256", 00:23:35.549 "sha384", 00:23:35.549 "sha512" 00:23:35.549 ], 00:23:35.549 "dhchap_dhgroups": [ 00:23:35.549 "null", 00:23:35.549 "ffdhe2048", 00:23:35.549 "ffdhe3072", 00:23:35.549 "ffdhe4096", 00:23:35.549 "ffdhe 02:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.549 6144", 00:23:35.549 "ffdhe8192" 00:23:35.549 ] 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "bdev_nvme_set_hotplug", 00:23:35.549 "params": { 00:23:35.549 "period_us": 100000, 00:23:35.549 "enable": false 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "bdev_malloc_create", 00:23:35.549 "params": { 00:23:35.549 "name": "malloc0", 00:23:35.549 "num_blocks": 8192, 00:23:35.549 "block_size": 4096, 00:23:35.549 "physical_block_size": 4096, 
00:23:35.549 "uuid": "68f3232f-a584-4f51-a378-ca18d2dffec8", 00:23:35.549 "optimal_io_boundary": 0 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "bdev_wait_for_examine" 00:23:35.549 } 00:23:35.549 ] 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "subsystem": "nbd", 00:23:35.549 "config": [] 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "subsystem": "scheduler", 00:23:35.549 "config": [ 00:23:35.549 { 00:23:35.549 "method": "framework_set_scheduler", 00:23:35.549 "params": { 00:23:35.549 "name": "static" 00:23:35.549 } 00:23:35.549 } 00:23:35.549 ] 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "subsystem": "nvmf", 00:23:35.549 "config": [ 00:23:35.549 { 00:23:35.549 "method": "nvmf_set_config", 00:23:35.549 "params": { 00:23:35.549 "discovery_filter": "match_any", 00:23:35.549 "admin_cmd_passthru": { 00:23:35.549 "identify_ctrlr": false 00:23:35.549 } 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "nvmf_set_max_subsystems", 00:23:35.549 "params": { 00:23:35.549 "max_subsystems": 1024 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "nvmf_set_crdt", 00:23:35.549 "params": { 00:23:35.549 "crdt1": 0, 00:23:35.549 "crdt2": 0, 00:23:35.549 "crdt3": 0 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "nvmf_create_transport", 00:23:35.549 "params": { 00:23:35.549 "trtype": "TCP", 00:23:35.549 "max_queue_depth": 128, 00:23:35.549 "max_io_qpairs_per_ctrlr": 127, 00:23:35.549 "in_capsule_data_size": 4096, 00:23:35.549 "max_io_size": 131072, 00:23:35.549 "io_unit_size": 131072, 00:23:35.549 "max_aq_depth": 128, 00:23:35.549 "num_shared_buffers": 511, 00:23:35.549 "buf_cache_size": 4294967295, 00:23:35.549 "dif_insert_or_strip": false, 00:23:35.549 "zcopy": false, 00:23:35.549 "c2h_success": false, 00:23:35.549 "sock_priority": 0, 00:23:35.549 "abort_timeout_sec": 1, 00:23:35.549 "ack_timeout": 0, 00:23:35.549 "data_wr_pool_size": 0 00:23:35.549 } 00:23:35.549 }, 00:23:35.549 { 00:23:35.549 "method": "nvmf_create_subsystem", 00:23:35.549 "params": { 00:23:35.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.549 "allow_any_host": false, 00:23:35.549 "serial_number": "SPDK00000000000001", 00:23:35.549 "model_number": "SPDK bdev Controller", 00:23:35.549 "max_namespaces": 10, 00:23:35.549 "min_cntlid": 1, 00:23:35.549 "max_cntlid": 65519, 00:23:35.549 "ana_reporting": false 00:23:35.549 } 00:23:35.549 }, 00:23:35.550 { 00:23:35.550 "method": "nvmf_subsystem_add_host", 00:23:35.550 "params": { 00:23:35.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.550 "host": "nqn.2016-06.io.spdk:host1", 00:23:35.550 "psk": "/tmp/tmp.392SeEuL9o" 00:23:35.550 } 00:23:35.550 }, 00:23:35.550 { 00:23:35.550 "method": "nvmf_subsystem_add_ns", 00:23:35.550 "params": { 00:23:35.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.550 "namespace": { 00:23:35.550 "nsid": 1, 00:23:35.550 "bdev_name": "malloc0", 00:23:35.550 "nguid": "68F3232FA5844F51A378CA18D2DFFEC8", 00:23:35.550 "uuid": "68f3232f-a584-4f51-a378-ca18d2dffec8", 00:23:35.550 "no_auto_visible": false 00:23:35.550 } 00:23:35.550 } 00:23:35.550 }, 00:23:35.550 { 00:23:35.550 "method": "nvmf_subsystem_add_listener", 00:23:35.550 "params": { 00:23:35.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.550 "listen_address": { 00:23:35.550 "trtype": "TCP", 00:23:35.550 "adrfam": "IPv4", 00:23:35.550 "traddr": "10.0.0.2", 00:23:35.550 "trsvcid": "4420" 00:23:35.550 }, 00:23:35.550 "secure_channel": true 00:23:35.550 } 00:23:35.550 } 00:23:35.550 ] 00:23:35.550 } 00:23:35.550 ] 00:23:35.550 }' 
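From this point the test no longer replays individual RPCs: the full target configuration captured above with save_config is echoed back into nvmf_tgt via -c /dev/fd/62, and the bdevperf counterpart arrives the same way on /dev/fd/63. A minimal sketch of that save-and-replay pattern; process substitution is one way to produce such a /dev/fd path, and the exact descriptor number is bash's choice rather than anything SPDK requires:

    tgtconf=$(./scripts/rpc.py save_config)              # dump the live target config as JSON
    ./build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")    # restart the target from that JSON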
00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1629986 00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1629986 00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1629986 ']' 00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.550 02:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.550 [2024-07-14 02:10:41.147005] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:35.550 [2024-07-14 02:10:41.147095] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.550 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.550 [2024-07-14 02:10:41.221702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.809 [2024-07-14 02:10:41.313579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.809 [2024-07-14 02:10:41.313638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.809 [2024-07-14 02:10:41.313668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.809 [2024-07-14 02:10:41.313681] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.809 [2024-07-14 02:10:41.313690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:35.809 [2024-07-14 02:10:41.313775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.067 [2024-07-14 02:10:41.538275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.067 [2024-07-14 02:10:41.554266] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:36.067 [2024-07-14 02:10:41.570316] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.067 [2024-07-14 02:10:41.586039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1630134 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1630134 /var/tmp/bdevperf.sock 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1630134 ']' 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.636 02:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:36.636 "subsystems": [ 00:23:36.636 { 00:23:36.636 "subsystem": "keyring", 00:23:36.636 "config": [] 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "subsystem": "iobuf", 00:23:36.636 "config": [ 00:23:36.636 { 00:23:36.636 "method": "iobuf_set_options", 00:23:36.636 "params": { 00:23:36.636 "small_pool_count": 8192, 00:23:36.636 "large_pool_count": 1024, 00:23:36.636 "small_bufsize": 8192, 00:23:36.636 "large_bufsize": 135168 00:23:36.636 } 00:23:36.636 } 00:23:36.636 ] 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "subsystem": "sock", 00:23:36.636 "config": [ 00:23:36.636 { 00:23:36.636 "method": "sock_set_default_impl", 00:23:36.636 "params": { 00:23:36.636 "impl_name": "posix" 00:23:36.636 } 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "method": "sock_impl_set_options", 00:23:36.636 "params": { 00:23:36.636 "impl_name": "ssl", 00:23:36.636 "recv_buf_size": 4096, 00:23:36.636 "send_buf_size": 4096, 00:23:36.636 "enable_recv_pipe": true, 00:23:36.636 "enable_quickack": false, 00:23:36.636 "enable_placement_id": 0, 00:23:36.636 "enable_zerocopy_send_server": true, 00:23:36.636 "enable_zerocopy_send_client": false, 00:23:36.636 "zerocopy_threshold": 0, 00:23:36.636 "tls_version": 0, 00:23:36.636 "enable_ktls": false 00:23:36.636 } 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "method": "sock_impl_set_options", 00:23:36.636 "params": { 00:23:36.636 "impl_name": "posix", 00:23:36.636 "recv_buf_size": 2097152, 00:23:36.636 "send_buf_size": 2097152, 00:23:36.636 "enable_recv_pipe": true, 00:23:36.636 
"enable_quickack": false, 00:23:36.636 "enable_placement_id": 0, 00:23:36.636 "enable_zerocopy_send_server": true, 00:23:36.636 "enable_zerocopy_send_client": false, 00:23:36.636 "zerocopy_threshold": 0, 00:23:36.636 "tls_version": 0, 00:23:36.636 "enable_ktls": false 00:23:36.636 } 00:23:36.636 } 00:23:36.636 ] 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "subsystem": "vmd", 00:23:36.636 "config": [] 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "subsystem": "accel", 00:23:36.636 "config": [ 00:23:36.636 { 00:23:36.636 "method": "accel_set_options", 00:23:36.636 "params": { 00:23:36.636 "small_cache_size": 128, 00:23:36.636 "large_cache_size": 16, 00:23:36.636 "task_count": 2048, 00:23:36.636 "sequence_count": 2048, 00:23:36.636 "buf_count": 2048 00:23:36.636 } 00:23:36.636 } 00:23:36.636 ] 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "subsystem": "bdev", 00:23:36.636 "config": [ 00:23:36.636 { 00:23:36.636 "method": "bdev_set_options", 00:23:36.636 "params": { 00:23:36.636 "bdev_io_pool_size": 65535, 00:23:36.636 "bdev_io_cache_size": 256, 00:23:36.636 "bdev_auto_examine": true, 00:23:36.636 "iobuf_small_cache_size": 128, 00:23:36.636 "iobuf_large_cache_size": 16 00:23:36.636 } 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "method": "bdev_raid_set_options", 00:23:36.636 "params": { 00:23:36.636 "process_window_size_kb": 1024 00:23:36.636 } 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "method": "bdev_iscsi_set_options", 00:23:36.636 "params": { 00:23:36.636 "timeout_sec": 30 00:23:36.636 } 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "method": "bdev_nvme_set_options", 00:23:36.636 "params": { 00:23:36.636 "action_on_timeout": "none", 00:23:36.636 "timeout_us": 0, 00:23:36.636 "timeout_admin_us": 0, 00:23:36.636 "keep_alive_timeout_ms": 10000, 00:23:36.636 "arbitration_burst": 0, 00:23:36.636 "low_priority_weight": 0, 00:23:36.636 "medium_priority_weight": 0, 00:23:36.636 "high_priority_weight": 0, 00:23:36.636 "nvme_adminq_poll_period_us": 10000, 00:23:36.636 "nvme_ioq_poll_period_us": 0, 00:23:36.636 "io_queue_requests": 512, 00:23:36.636 "delay_cmd_submit": true, 00:23:36.636 "transport_retry_count": 4, 00:23:36.636 "bdev_retry_count": 3, 00:23:36.636 "transport_ack_timeout": 0, 00:23:36.636 "ctrlr_loss_timeout_sec": 0, 00:23:36.636 "reconnect_delay_sec": 0, 00:23:36.636 "fast_io_fail_timeout_sec": 0, 00:23:36.636 "disable_auto_failback": false, 00:23:36.636 "generate_uuids": false, 00:23:36.636 "transport_tos": 0, 00:23:36.636 "nvme_error_stat": false, 00:23:36.636 "rdma_srq_size": 0, 00:23:36.636 "io_path_stat": false, 00:23:36.636 "allow_accel_sequence": false, 00:23:36.636 "rdma_max_cq_size": 0, 00:23:36.636 "rdma_cm_event_timeout_ms": 0, 00:23:36.636 "dhchap_digests": [ 00:23:36.636 "sha256", 00:23:36.636 "sha384", 00:23:36.636 "sha512" 00:23:36.636 ], 00:23:36.636 "dhchap_dhgroups": [ 00:23:36.636 "null", 00:23:36.636 "ffdhe2048", 00:23:36.636 "ffdhe3072", 00:23:36.636 "ffdhe4096", 00:23:36.636 "ffdhe6144", 00:23:36.636 "ffdhe8192" 00:23:36.636 ] 00:23:36.636 } 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "method": "bdev_nvme_attach_controller", 00:23:36.636 "params": { 00:23:36.636 "name": "TLSTEST", 00:23:36.636 "trtype": "TCP", 00:23:36.636 "adrfam": "IPv4", 00:23:36.636 "traddr": "10.0.0.2", 00:23:36.636 "trsvcid": "4420", 00:23:36.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.636 "prchk_reftag": false, 00:23:36.636 "prchk_guard": false, 00:23:36.636 "ctrlr_loss_timeout_sec": 0, 00:23:36.636 "reconnect_delay_sec": 0, 00:23:36.636 "fast_io_fail_timeout_sec": 0, 00:23:36.636 
"psk": "/tmp/tmp.392SeEuL9o", 00:23:36.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.636 "hdgst": false, 00:23:36.636 "ddgst": false 00:23:36.636 } 00:23:36.636 }, 00:23:36.636 { 00:23:36.636 "method": "bdev_nvme_set_hotplug", 00:23:36.636 "params": { 00:23:36.636 "period_us": 100000, 00:23:36.636 "enable": false 00:23:36.636 } 00:23:36.637 }, 00:23:36.637 { 00:23:36.637 "method": "bdev_wait_for_examine" 00:23:36.637 } 00:23:36.637 ] 00:23:36.637 }, 00:23:36.637 { 00:23:36.637 "subsystem": "nbd", 00:23:36.637 "config": [] 00:23:36.637 } 00:23:36.637 ] 00:23:36.637 }' 00:23:36.637 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.637 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.637 02:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.637 [2024-07-14 02:10:42.206929] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:36.637 [2024-07-14 02:10:42.207003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1630134 ] 00:23:36.637 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.637 [2024-07-14 02:10:42.264375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.954 [2024-07-14 02:10:42.349528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.954 [2024-07-14 02:10:42.516168] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.954 [2024-07-14 02:10:42.516324] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:37.520 02:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.520 02:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:37.520 02:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:37.776 Running I/O for 10 seconds... 
00:23:47.737 00:23:47.737 Latency(us) 00:23:47.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.737 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:47.737 Verification LBA range: start 0x0 length 0x2000 00:23:47.737 TLSTESTn1 : 10.07 1824.15 7.13 0.00 0.00 69952.73 7184.69 103304.15 00:23:47.737 =================================================================================================================== 00:23:47.737 Total : 1824.15 7.13 0.00 0.00 69952.73 7184.69 103304.15 00:23:47.737 0 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1630134 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1630134 ']' 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1630134 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1630134 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1630134' 00:23:47.737 killing process with pid 1630134 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1630134 00:23:47.737 Received shutdown signal, test time was about 10.000000 seconds 00:23:47.737 00:23:47.737 Latency(us) 00:23:47.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.737 =================================================================================================================== 00:23:47.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:47.737 [2024-07-14 02:10:53.366551] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:47.737 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1630134 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1629986 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1629986 ']' 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1629986 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629986 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629986' 00:23:47.994 killing process with pid 1629986 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1629986 00:23:47.994 [2024-07-14 02:10:53.620334] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:23:47.994 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1629986 00:23:48.252 02:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:48.252 02:10:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1631468 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1631468 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1631468 ']' 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.253 02:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.253 [2024-07-14 02:10:53.918154] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:48.253 [2024-07-14 02:10:53.918245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.511 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.511 [2024-07-14 02:10:53.984832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.511 [2024-07-14 02:10:54.070706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.511 [2024-07-14 02:10:54.070761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.511 [2024-07-14 02:10:54.070789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.511 [2024-07-14 02:10:54.070801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.511 [2024-07-14 02:10:54.070811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:48.511 [2024-07-14 02:10:54.070844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.511 02:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.511 02:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:48.511 02:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.511 02:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:48.511 02:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.511 02:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.511 02:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.392SeEuL9o 00:23:48.511 02:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.392SeEuL9o 00:23:48.511 02:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:48.769 [2024-07-14 02:10:54.424504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.769 02:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:49.334 02:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:49.334 [2024-07-14 02:10:54.974034] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:49.334 [2024-07-14 02:10:54.974296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.334 02:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:49.592 malloc0 00:23:49.592 02:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:49.850 02:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.392SeEuL9o 00:23:50.108 [2024-07-14 02:10:55.732544] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1631747 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1631747 /var/tmp/bdevperf.sock 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1631747 ']' 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.108 02:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.108 [2024-07-14 02:10:55.792066] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:50.108 [2024-07-14 02:10:55.792143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631747 ] 00:23:50.365 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.365 [2024-07-14 02:10:55.850971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.365 [2024-07-14 02:10:55.937560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.365 02:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.365 02:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:50.365 02:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.392SeEuL9o 00:23:50.930 02:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:50.930 [2024-07-14 02:10:56.597802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.187 nvme0n1 00:23:51.187 02:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:51.187 Running I/O for 1 seconds... 
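Annotation: the PSK-secured NVMe/TCP flow exercised in the trace above reduces to the RPC sequence below. Commands are the ones actually issued by tls.sh; ./spdk and /tmp/psk.txt are placeholders for the workspace checkout and the mktemp'd PSK interchange file (/tmp/tmp.392SeEuL9o in this run):

    # target side: TCP transport, subsystem, TLS-enabled listener, malloc namespace, host PSK
    ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
    ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    ./spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.txt

    # initiator side (bdevperf RPC socket): load the same PSK into the keyring, attach by key name
    ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt
    ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1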
00:23:52.559 00:23:52.559 Latency(us) 00:23:52.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.559 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:52.559 Verification LBA range: start 0x0 length 0x2000 00:23:52.559 nvme0n1 : 1.06 1690.56 6.60 0.00 0.00 73855.22 6407.96 114178.28 00:23:52.559 =================================================================================================================== 00:23:52.559 Total : 1690.56 6.60 0.00 0.00 73855.22 6407.96 114178.28 00:23:52.559 0 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1631747 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1631747 ']' 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1631747 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631747 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631747' 00:23:52.559 killing process with pid 1631747 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1631747 00:23:52.559 Received shutdown signal, test time was about 1.000000 seconds 00:23:52.559 00:23:52.559 Latency(us) 00:23:52.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.559 =================================================================================================================== 00:23:52.559 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.559 02:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1631747 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1631468 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1631468 ']' 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1631468 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631468 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631468' 00:23:52.559 killing process with pid 1631468 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1631468 00:23:52.559 [2024-07-14 02:10:58.179077] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:52.559 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1631468 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.818 
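Annotation: the "nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09" warning counted above is emitted by nvmf_tcp_subsystem_add_host when a raw file path is passed via --psk. The target's own saved configuration (the tgtcfg dump produced later in this run by save_config) already expresses the same association through the keyring instead, referencing the key by name:

    { "method": "keyring_file_add_key",
      "params": { "name": "key0", "path": "/tmp/tmp.392SeEuL9o" } }
    { "method": "nvmf_subsystem_add_host",
      "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } }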
02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1632029 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1632029 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1632029 ']' 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.818 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.818 [2024-07-14 02:10:58.470069] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:52.818 [2024-07-14 02:10:58.470153] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.818 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.077 [2024-07-14 02:10:58.542068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.077 [2024-07-14 02:10:58.632780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.077 [2024-07-14 02:10:58.632844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.077 [2024-07-14 02:10:58.632879] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.077 [2024-07-14 02:10:58.632901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.077 [2024-07-14 02:10:58.632913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.077 [2024-07-14 02:10:58.632946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.077 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.077 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:53.077 02:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.077 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.077 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.335 [2024-07-14 02:10:58.784111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.335 malloc0 00:23:53.335 [2024-07-14 02:10:58.816645] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.335 [2024-07-14 02:10:58.816908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1632173 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1632173 /var/tmp/bdevperf.sock 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1632173 ']' 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.335 02:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.335 [2024-07-14 02:10:58.887428] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:53.335 [2024-07-14 02:10:58.887502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632173 ] 00:23:53.335 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.335 [2024-07-14 02:10:58.949886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.594 [2024-07-14 02:10:59.041543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.594 02:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.594 02:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:53.594 02:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.392SeEuL9o 00:23:53.852 02:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:54.110 [2024-07-14 02:10:59.690130] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.110 nvme0n1 00:23:54.110 02:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.368 Running I/O for 1 seconds... 00:23:55.301 00:23:55.301 Latency(us) 00:23:55.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.301 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:55.301 Verification LBA range: start 0x0 length 0x2000 00:23:55.301 nvme0n1 : 1.07 1703.87 6.66 0.00 0.00 73229.30 6213.78 111071.38 00:23:55.301 =================================================================================================================== 00:23:55.301 Total : 1703.87 6.66 0.00 0.00 73229.30 6213.78 111071.38 00:23:55.301 0 00:23:55.301 02:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:55.301 02:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.301 02:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.560 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.560 02:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:55.560 "subsystems": [ 00:23:55.560 { 00:23:55.560 "subsystem": "keyring", 00:23:55.560 "config": [ 00:23:55.560 { 00:23:55.560 "method": "keyring_file_add_key", 00:23:55.560 "params": { 00:23:55.560 "name": "key0", 00:23:55.560 "path": "/tmp/tmp.392SeEuL9o" 00:23:55.560 } 00:23:55.560 } 00:23:55.560 ] 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "subsystem": "iobuf", 00:23:55.560 "config": [ 00:23:55.560 { 00:23:55.560 "method": "iobuf_set_options", 00:23:55.560 "params": { 00:23:55.560 "small_pool_count": 8192, 00:23:55.560 "large_pool_count": 1024, 00:23:55.560 "small_bufsize": 8192, 00:23:55.560 "large_bufsize": 135168 00:23:55.560 } 00:23:55.560 } 00:23:55.560 ] 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "subsystem": "sock", 00:23:55.560 "config": [ 00:23:55.560 { 00:23:55.560 "method": "sock_set_default_impl", 00:23:55.560 "params": { 00:23:55.560 "impl_name": "posix" 00:23:55.560 } 
00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "sock_impl_set_options", 00:23:55.560 "params": { 00:23:55.560 "impl_name": "ssl", 00:23:55.560 "recv_buf_size": 4096, 00:23:55.560 "send_buf_size": 4096, 00:23:55.560 "enable_recv_pipe": true, 00:23:55.560 "enable_quickack": false, 00:23:55.560 "enable_placement_id": 0, 00:23:55.560 "enable_zerocopy_send_server": true, 00:23:55.560 "enable_zerocopy_send_client": false, 00:23:55.560 "zerocopy_threshold": 0, 00:23:55.560 "tls_version": 0, 00:23:55.560 "enable_ktls": false 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "sock_impl_set_options", 00:23:55.560 "params": { 00:23:55.560 "impl_name": "posix", 00:23:55.560 "recv_buf_size": 2097152, 00:23:55.560 "send_buf_size": 2097152, 00:23:55.560 "enable_recv_pipe": true, 00:23:55.560 "enable_quickack": false, 00:23:55.560 "enable_placement_id": 0, 00:23:55.560 "enable_zerocopy_send_server": true, 00:23:55.560 "enable_zerocopy_send_client": false, 00:23:55.560 "zerocopy_threshold": 0, 00:23:55.560 "tls_version": 0, 00:23:55.560 "enable_ktls": false 00:23:55.560 } 00:23:55.560 } 00:23:55.560 ] 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "subsystem": "vmd", 00:23:55.560 "config": [] 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "subsystem": "accel", 00:23:55.560 "config": [ 00:23:55.560 { 00:23:55.560 "method": "accel_set_options", 00:23:55.560 "params": { 00:23:55.560 "small_cache_size": 128, 00:23:55.560 "large_cache_size": 16, 00:23:55.560 "task_count": 2048, 00:23:55.560 "sequence_count": 2048, 00:23:55.560 "buf_count": 2048 00:23:55.560 } 00:23:55.560 } 00:23:55.560 ] 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "subsystem": "bdev", 00:23:55.560 "config": [ 00:23:55.560 { 00:23:55.560 "method": "bdev_set_options", 00:23:55.560 "params": { 00:23:55.560 "bdev_io_pool_size": 65535, 00:23:55.560 "bdev_io_cache_size": 256, 00:23:55.560 "bdev_auto_examine": true, 00:23:55.560 "iobuf_small_cache_size": 128, 00:23:55.560 "iobuf_large_cache_size": 16 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "bdev_raid_set_options", 00:23:55.560 "params": { 00:23:55.560 "process_window_size_kb": 1024 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "bdev_iscsi_set_options", 00:23:55.560 "params": { 00:23:55.560 "timeout_sec": 30 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "bdev_nvme_set_options", 00:23:55.560 "params": { 00:23:55.560 "action_on_timeout": "none", 00:23:55.560 "timeout_us": 0, 00:23:55.560 "timeout_admin_us": 0, 00:23:55.560 "keep_alive_timeout_ms": 10000, 00:23:55.560 "arbitration_burst": 0, 00:23:55.560 "low_priority_weight": 0, 00:23:55.560 "medium_priority_weight": 0, 00:23:55.560 "high_priority_weight": 0, 00:23:55.560 "nvme_adminq_poll_period_us": 10000, 00:23:55.560 "nvme_ioq_poll_period_us": 0, 00:23:55.560 "io_queue_requests": 0, 00:23:55.560 "delay_cmd_submit": true, 00:23:55.560 "transport_retry_count": 4, 00:23:55.560 "bdev_retry_count": 3, 00:23:55.560 "transport_ack_timeout": 0, 00:23:55.560 "ctrlr_loss_timeout_sec": 0, 00:23:55.560 "reconnect_delay_sec": 0, 00:23:55.560 "fast_io_fail_timeout_sec": 0, 00:23:55.560 "disable_auto_failback": false, 00:23:55.560 "generate_uuids": false, 00:23:55.560 "transport_tos": 0, 00:23:55.560 "nvme_error_stat": false, 00:23:55.560 "rdma_srq_size": 0, 00:23:55.560 "io_path_stat": false, 00:23:55.560 "allow_accel_sequence": false, 00:23:55.560 "rdma_max_cq_size": 0, 00:23:55.560 "rdma_cm_event_timeout_ms": 0, 00:23:55.560 "dhchap_digests": [ 00:23:55.560 "sha256", 
00:23:55.560 "sha384", 00:23:55.560 "sha512" 00:23:55.560 ], 00:23:55.560 "dhchap_dhgroups": [ 00:23:55.560 "null", 00:23:55.560 "ffdhe2048", 00:23:55.560 "ffdhe3072", 00:23:55.560 "ffdhe4096", 00:23:55.560 "ffdhe6144", 00:23:55.560 "ffdhe8192" 00:23:55.560 ] 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "bdev_nvme_set_hotplug", 00:23:55.560 "params": { 00:23:55.560 "period_us": 100000, 00:23:55.560 "enable": false 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "bdev_malloc_create", 00:23:55.560 "params": { 00:23:55.560 "name": "malloc0", 00:23:55.560 "num_blocks": 8192, 00:23:55.560 "block_size": 4096, 00:23:55.560 "physical_block_size": 4096, 00:23:55.560 "uuid": "1d7b63cf-35ba-4786-bd37-d0dedaa4af68", 00:23:55.560 "optimal_io_boundary": 0 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "bdev_wait_for_examine" 00:23:55.560 } 00:23:55.560 ] 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "subsystem": "nbd", 00:23:55.560 "config": [] 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "subsystem": "scheduler", 00:23:55.560 "config": [ 00:23:55.560 { 00:23:55.560 "method": "framework_set_scheduler", 00:23:55.560 "params": { 00:23:55.560 "name": "static" 00:23:55.560 } 00:23:55.560 } 00:23:55.560 ] 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "subsystem": "nvmf", 00:23:55.560 "config": [ 00:23:55.560 { 00:23:55.560 "method": "nvmf_set_config", 00:23:55.560 "params": { 00:23:55.560 "discovery_filter": "match_any", 00:23:55.560 "admin_cmd_passthru": { 00:23:55.560 "identify_ctrlr": false 00:23:55.560 } 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "nvmf_set_max_subsystems", 00:23:55.560 "params": { 00:23:55.560 "max_subsystems": 1024 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "nvmf_set_crdt", 00:23:55.560 "params": { 00:23:55.560 "crdt1": 0, 00:23:55.560 "crdt2": 0, 00:23:55.560 "crdt3": 0 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "nvmf_create_transport", 00:23:55.560 "params": { 00:23:55.560 "trtype": "TCP", 00:23:55.560 "max_queue_depth": 128, 00:23:55.560 "max_io_qpairs_per_ctrlr": 127, 00:23:55.560 "in_capsule_data_size": 4096, 00:23:55.560 "max_io_size": 131072, 00:23:55.560 "io_unit_size": 131072, 00:23:55.560 "max_aq_depth": 128, 00:23:55.560 "num_shared_buffers": 511, 00:23:55.560 "buf_cache_size": 4294967295, 00:23:55.560 "dif_insert_or_strip": false, 00:23:55.560 "zcopy": false, 00:23:55.560 "c2h_success": false, 00:23:55.560 "sock_priority": 0, 00:23:55.560 "abort_timeout_sec": 1, 00:23:55.560 "ack_timeout": 0, 00:23:55.560 "data_wr_pool_size": 0 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "nvmf_create_subsystem", 00:23:55.560 "params": { 00:23:55.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.560 "allow_any_host": false, 00:23:55.560 "serial_number": "00000000000000000000", 00:23:55.560 "model_number": "SPDK bdev Controller", 00:23:55.560 "max_namespaces": 32, 00:23:55.560 "min_cntlid": 1, 00:23:55.560 "max_cntlid": 65519, 00:23:55.560 "ana_reporting": false 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "nvmf_subsystem_add_host", 00:23:55.560 "params": { 00:23:55.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.560 "host": "nqn.2016-06.io.spdk:host1", 00:23:55.560 "psk": "key0" 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "nvmf_subsystem_add_ns", 00:23:55.560 "params": { 00:23:55.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.560 "namespace": { 00:23:55.560 "nsid": 1, 
00:23:55.560 "bdev_name": "malloc0", 00:23:55.560 "nguid": "1D7B63CF35BA4786BD37D0DEDAA4AF68", 00:23:55.560 "uuid": "1d7b63cf-35ba-4786-bd37-d0dedaa4af68", 00:23:55.560 "no_auto_visible": false 00:23:55.560 } 00:23:55.560 } 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "method": "nvmf_subsystem_add_listener", 00:23:55.560 "params": { 00:23:55.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.560 "listen_address": { 00:23:55.560 "trtype": "TCP", 00:23:55.560 "adrfam": "IPv4", 00:23:55.560 "traddr": "10.0.0.2", 00:23:55.560 "trsvcid": "4420" 00:23:55.560 }, 00:23:55.560 "secure_channel": true 00:23:55.560 } 00:23:55.560 } 00:23:55.560 ] 00:23:55.560 } 00:23:55.560 ] 00:23:55.561 }' 00:23:55.561 02:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:55.819 02:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:55.819 "subsystems": [ 00:23:55.819 { 00:23:55.819 "subsystem": "keyring", 00:23:55.819 "config": [ 00:23:55.819 { 00:23:55.819 "method": "keyring_file_add_key", 00:23:55.819 "params": { 00:23:55.819 "name": "key0", 00:23:55.819 "path": "/tmp/tmp.392SeEuL9o" 00:23:55.819 } 00:23:55.819 } 00:23:55.819 ] 00:23:55.819 }, 00:23:55.819 { 00:23:55.819 "subsystem": "iobuf", 00:23:55.819 "config": [ 00:23:55.819 { 00:23:55.819 "method": "iobuf_set_options", 00:23:55.819 "params": { 00:23:55.819 "small_pool_count": 8192, 00:23:55.819 "large_pool_count": 1024, 00:23:55.819 "small_bufsize": 8192, 00:23:55.819 "large_bufsize": 135168 00:23:55.819 } 00:23:55.819 } 00:23:55.819 ] 00:23:55.819 }, 00:23:55.819 { 00:23:55.819 "subsystem": "sock", 00:23:55.819 "config": [ 00:23:55.819 { 00:23:55.819 "method": "sock_set_default_impl", 00:23:55.819 "params": { 00:23:55.819 "impl_name": "posix" 00:23:55.819 } 00:23:55.819 }, 00:23:55.819 { 00:23:55.819 "method": "sock_impl_set_options", 00:23:55.819 "params": { 00:23:55.819 "impl_name": "ssl", 00:23:55.819 "recv_buf_size": 4096, 00:23:55.819 "send_buf_size": 4096, 00:23:55.819 "enable_recv_pipe": true, 00:23:55.819 "enable_quickack": false, 00:23:55.819 "enable_placement_id": 0, 00:23:55.819 "enable_zerocopy_send_server": true, 00:23:55.819 "enable_zerocopy_send_client": false, 00:23:55.819 "zerocopy_threshold": 0, 00:23:55.819 "tls_version": 0, 00:23:55.819 "enable_ktls": false 00:23:55.819 } 00:23:55.819 }, 00:23:55.819 { 00:23:55.819 "method": "sock_impl_set_options", 00:23:55.819 "params": { 00:23:55.819 "impl_name": "posix", 00:23:55.819 "recv_buf_size": 2097152, 00:23:55.819 "send_buf_size": 2097152, 00:23:55.819 "enable_recv_pipe": true, 00:23:55.819 "enable_quickack": false, 00:23:55.819 "enable_placement_id": 0, 00:23:55.819 "enable_zerocopy_send_server": true, 00:23:55.819 "enable_zerocopy_send_client": false, 00:23:55.819 "zerocopy_threshold": 0, 00:23:55.819 "tls_version": 0, 00:23:55.819 "enable_ktls": false 00:23:55.819 } 00:23:55.819 } 00:23:55.819 ] 00:23:55.819 }, 00:23:55.819 { 00:23:55.819 "subsystem": "vmd", 00:23:55.819 "config": [] 00:23:55.819 }, 00:23:55.819 { 00:23:55.819 "subsystem": "accel", 00:23:55.819 "config": [ 00:23:55.819 { 00:23:55.819 "method": "accel_set_options", 00:23:55.819 "params": { 00:23:55.819 "small_cache_size": 128, 00:23:55.819 "large_cache_size": 16, 00:23:55.819 "task_count": 2048, 00:23:55.819 "sequence_count": 2048, 00:23:55.819 "buf_count": 2048 00:23:55.819 } 00:23:55.819 } 00:23:55.819 ] 00:23:55.819 }, 00:23:55.819 { 00:23:55.819 "subsystem": "bdev", 00:23:55.819 "config": [ 
00:23:55.819 { 00:23:55.819 "method": "bdev_set_options", 00:23:55.819 "params": { 00:23:55.819 "bdev_io_pool_size": 65535, 00:23:55.819 "bdev_io_cache_size": 256, 00:23:55.819 "bdev_auto_examine": true, 00:23:55.819 "iobuf_small_cache_size": 128, 00:23:55.819 "iobuf_large_cache_size": 16 00:23:55.819 } 00:23:55.820 }, 00:23:55.820 { 00:23:55.820 "method": "bdev_raid_set_options", 00:23:55.820 "params": { 00:23:55.820 "process_window_size_kb": 1024 00:23:55.820 } 00:23:55.820 }, 00:23:55.820 { 00:23:55.820 "method": "bdev_iscsi_set_options", 00:23:55.820 "params": { 00:23:55.820 "timeout_sec": 30 00:23:55.820 } 00:23:55.820 }, 00:23:55.820 { 00:23:55.820 "method": "bdev_nvme_set_options", 00:23:55.820 "params": { 00:23:55.820 "action_on_timeout": "none", 00:23:55.820 "timeout_us": 0, 00:23:55.820 "timeout_admin_us": 0, 00:23:55.820 "keep_alive_timeout_ms": 10000, 00:23:55.820 "arbitration_burst": 0, 00:23:55.820 "low_priority_weight": 0, 00:23:55.820 "medium_priority_weight": 0, 00:23:55.820 "high_priority_weight": 0, 00:23:55.820 "nvme_adminq_poll_period_us": 10000, 00:23:55.820 "nvme_ioq_poll_period_us": 0, 00:23:55.820 "io_queue_requests": 512, 00:23:55.820 "delay_cmd_submit": true, 00:23:55.820 "transport_retry_count": 4, 00:23:55.820 "bdev_retry_count": 3, 00:23:55.820 "transport_ack_timeout": 0, 00:23:55.820 "ctrlr_loss_timeout_sec": 0, 00:23:55.820 "reconnect_delay_sec": 0, 00:23:55.820 "fast_io_fail_timeout_sec": 0, 00:23:55.820 "disable_auto_failback": false, 00:23:55.820 "generate_uuids": false, 00:23:55.820 "transport_tos": 0, 00:23:55.820 "nvme_error_stat": false, 00:23:55.820 "rdma_srq_size": 0, 00:23:55.820 "io_path_stat": false, 00:23:55.820 "allow_accel_sequence": false, 00:23:55.820 "rdma_max_cq_size": 0, 00:23:55.820 "rdma_cm_event_timeout_ms": 0, 00:23:55.820 "dhchap_digests": [ 00:23:55.820 "sha256", 00:23:55.820 "sha384", 00:23:55.820 "sha512" 00:23:55.820 ], 00:23:55.820 "dhchap_dhgroups": [ 00:23:55.820 "null", 00:23:55.820 "ffdhe2048", 00:23:55.820 "ffdhe3072", 00:23:55.820 "ffdhe4096", 00:23:55.820 "ffdhe6144", 00:23:55.820 "ffdhe8192" 00:23:55.820 ] 00:23:55.820 } 00:23:55.820 }, 00:23:55.820 { 00:23:55.820 "method": "bdev_nvme_attach_controller", 00:23:55.820 "params": { 00:23:55.820 "name": "nvme0", 00:23:55.820 "trtype": "TCP", 00:23:55.820 "adrfam": "IPv4", 00:23:55.820 "traddr": "10.0.0.2", 00:23:55.820 "trsvcid": "4420", 00:23:55.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.820 "prchk_reftag": false, 00:23:55.820 "prchk_guard": false, 00:23:55.820 "ctrlr_loss_timeout_sec": 0, 00:23:55.820 "reconnect_delay_sec": 0, 00:23:55.820 "fast_io_fail_timeout_sec": 0, 00:23:55.820 "psk": "key0", 00:23:55.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.820 "hdgst": false, 00:23:55.820 "ddgst": false 00:23:55.820 } 00:23:55.820 }, 00:23:55.820 { 00:23:55.820 "method": "bdev_nvme_set_hotplug", 00:23:55.820 "params": { 00:23:55.820 "period_us": 100000, 00:23:55.820 "enable": false 00:23:55.820 } 00:23:55.820 }, 00:23:55.820 { 00:23:55.820 "method": "bdev_enable_histogram", 00:23:55.820 "params": { 00:23:55.820 "name": "nvme0n1", 00:23:55.820 "enable": true 00:23:55.820 } 00:23:55.820 }, 00:23:55.820 { 00:23:55.820 "method": "bdev_wait_for_examine" 00:23:55.820 } 00:23:55.820 ] 00:23:55.820 }, 00:23:55.820 { 00:23:55.820 "subsystem": "nbd", 00:23:55.820 "config": [] 00:23:55.820 } 00:23:55.820 ] 00:23:55.820 }' 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1632173 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1632173 ']' 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1632173 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632173 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632173' 00:23:55.820 killing process with pid 1632173 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1632173 00:23:55.820 Received shutdown signal, test time was about 1.000000 seconds 00:23:55.820 00:23:55.820 Latency(us) 00:23:55.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.820 =================================================================================================================== 00:23:55.820 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.820 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1632173 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1632029 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1632029 ']' 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1632029 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632029 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632029' 00:23:56.078 killing process with pid 1632029 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1632029 00:23:56.078 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1632029 00:23:56.337 02:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:56.337 02:11:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.337 02:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:56.337 "subsystems": [ 00:23:56.337 { 00:23:56.337 "subsystem": "keyring", 00:23:56.337 "config": [ 00:23:56.337 { 00:23:56.337 "method": "keyring_file_add_key", 00:23:56.337 "params": { 00:23:56.337 "name": "key0", 00:23:56.337 "path": "/tmp/tmp.392SeEuL9o" 00:23:56.337 } 00:23:56.337 } 00:23:56.337 ] 00:23:56.337 }, 00:23:56.337 { 00:23:56.337 "subsystem": "iobuf", 00:23:56.337 "config": [ 00:23:56.337 { 00:23:56.337 "method": "iobuf_set_options", 00:23:56.337 "params": { 00:23:56.337 "small_pool_count": 8192, 00:23:56.337 "large_pool_count": 1024, 00:23:56.337 "small_bufsize": 8192, 00:23:56.337 "large_bufsize": 135168 00:23:56.337 } 00:23:56.337 } 00:23:56.337 ] 00:23:56.337 }, 00:23:56.337 { 00:23:56.337 "subsystem": "sock", 00:23:56.337 "config": [ 00:23:56.337 { 
00:23:56.337 "method": "sock_set_default_impl", 00:23:56.337 "params": { 00:23:56.337 "impl_name": "posix" 00:23:56.337 } 00:23:56.337 }, 00:23:56.337 { 00:23:56.337 "method": "sock_impl_set_options", 00:23:56.337 "params": { 00:23:56.337 "impl_name": "ssl", 00:23:56.337 "recv_buf_size": 4096, 00:23:56.337 "send_buf_size": 4096, 00:23:56.337 "enable_recv_pipe": true, 00:23:56.337 "enable_quickack": false, 00:23:56.337 "enable_placement_id": 0, 00:23:56.337 "enable_zerocopy_send_server": true, 00:23:56.337 "enable_zerocopy_send_client": false, 00:23:56.337 "zerocopy_threshold": 0, 00:23:56.337 "tls_version": 0, 00:23:56.337 "enable_ktls": false 00:23:56.337 } 00:23:56.337 }, 00:23:56.337 { 00:23:56.337 "method": "sock_impl_set_options", 00:23:56.337 "params": { 00:23:56.337 "impl_name": "posix", 00:23:56.337 "recv_buf_size": 2097152, 00:23:56.337 "send_buf_size": 2097152, 00:23:56.337 "enable_recv_pipe": true, 00:23:56.337 "enable_quickack": false, 00:23:56.337 "enable_placement_id": 0, 00:23:56.337 "enable_zerocopy_send_server": true, 00:23:56.337 "enable_zerocopy_send_client": false, 00:23:56.337 "zerocopy_threshold": 0, 00:23:56.337 "tls_version": 0, 00:23:56.337 "enable_ktls": false 00:23:56.337 } 00:23:56.337 } 00:23:56.337 ] 00:23:56.337 }, 00:23:56.337 { 00:23:56.337 "subsystem": "vmd", 00:23:56.337 "config": [] 00:23:56.337 }, 00:23:56.337 { 00:23:56.337 "subsystem": "accel", 00:23:56.337 "config": [ 00:23:56.337 { 00:23:56.337 "method": "accel_set_options", 00:23:56.337 "params": { 00:23:56.337 "small_cache_size": 128, 00:23:56.337 "large_cache_size": 16, 00:23:56.337 "task_count": 2048, 00:23:56.337 "sequence_count": 2048, 00:23:56.337 "buf_count": 2048 00:23:56.337 } 00:23:56.337 } 00:23:56.337 ] 00:23:56.337 }, 00:23:56.337 { 00:23:56.337 "subsystem": "bdev", 00:23:56.337 "config": [ 00:23:56.337 { 00:23:56.337 "method": "bdev_set_options", 00:23:56.337 "params": { 00:23:56.337 "bdev_io_pool_size": 65535, 00:23:56.337 "bdev_io_cache_size": 256, 00:23:56.337 "bdev_auto_examine": true, 00:23:56.337 "iobuf_small_cache_size": 128, 00:23:56.337 "iobuf_large_cache_size": 16 00:23:56.337 } 00:23:56.337 }, 00:23:56.337 { 00:23:56.337 "method": "bdev_raid_set_options", 00:23:56.337 "params": { 00:23:56.337 "process_window_size_kb": 1024 00:23:56.337 } 00:23:56.337 }, 00:23:56.337 { 00:23:56.338 "method": "bdev_iscsi_set_options", 00:23:56.338 "params": { 00:23:56.338 "timeout_sec": 30 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "bdev_nvme_set_options", 00:23:56.338 "params": { 00:23:56.338 "action_on_timeout": "none", 00:23:56.338 "timeout_us": 0, 00:23:56.338 "timeout_admin_us": 0, 00:23:56.338 "keep_alive_timeout_ms": 10000, 00:23:56.338 "arbitration_burst": 0, 00:23:56.338 "low_priority_weight": 0, 00:23:56.338 "medium_priority_weight": 0, 00:23:56.338 "high_priority_weight": 0, 00:23:56.338 "nvme_adminq_poll_period_us": 10000, 00:23:56.338 "nvme_ioq_poll_period_us": 0, 00:23:56.338 "io_queue_requests": 0, 00:23:56.338 "delay_cmd_submit": true, 00:23:56.338 "transport_retry_count": 4, 00:23:56.338 "bdev_retry_count": 3, 00:23:56.338 "transport_ack_timeout": 0, 00:23:56.338 "ctrlr_loss_timeout_sec": 0, 00:23:56.338 "reconnect_delay_sec": 0, 00:23:56.338 "fast_io_fail_timeout_sec": 0, 00:23:56.338 "disable_auto_failback": false, 00:23:56.338 "generate_uuids": false, 00:23:56.338 "transport_tos": 0, 00:23:56.338 "nvme_error_stat": false, 00:23:56.338 "rdma_srq_size": 0, 00:23:56.338 "io_path_stat": false, 00:23:56.338 "allow_accel_sequence": false, 00:23:56.338 
"rdma_max_cq_size": 0, 00:23:56.338 "rdma_cm_event_timeout_ms": 0, 00:23:56.338 "dhchap_digests": [ 00:23:56.338 "sha256", 00:23:56.338 "sha384", 00:23:56.338 "sha512" 00:23:56.338 ], 00:23:56.338 "dhchap_dhgroups": [ 00:23:56.338 "null", 00:23:56.338 "ffdhe2048", 00:23:56.338 "ffdhe3072", 00:23:56.338 "ffdhe4096", 00:23:56.338 "ffdhe6144", 00:23:56.338 "ffdhe8192" 00:23:56.338 ] 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "bdev_nvme_set_hotplug", 00:23:56.338 "params": { 00:23:56.338 "period_us": 100000, 00:23:56.338 "enable": false 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "bdev_malloc_create", 00:23:56.338 "params": { 00:23:56.338 "name": "malloc0", 00:23:56.338 "num_blocks": 8192, 00:23:56.338 "block_size": 4096, 00:23:56.338 "physical_block_size": 4096, 00:23:56.338 "uuid": "1d7b63cf-35ba-4786-bd37-d0dedaa4af68", 00:23:56.338 "optimal_io_boundary": 0 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "bdev_wait_for_examine" 00:23:56.338 } 00:23:56.338 ] 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "subsystem": "nbd", 00:23:56.338 "config": [] 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "subsystem": "scheduler", 00:23:56.338 "config": [ 00:23:56.338 { 00:23:56.338 "method": "framework_set_scheduler", 00:23:56.338 "params": { 00:23:56.338 "name": "static" 00:23:56.338 } 00:23:56.338 } 00:23:56.338 ] 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "subsystem": "nvmf", 00:23:56.338 "config": [ 00:23:56.338 { 00:23:56.338 "method": "nvmf_set_config", 00:23:56.338 "params": { 00:23:56.338 "discovery_filter": "match_any", 00:23:56.338 "admin_cmd_passthru": { 00:23:56.338 "identify_ctrlr": false 00:23:56.338 } 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "nvmf_set_max_subsystems", 00:23:56.338 "params": { 00:23:56.338 "max_subsystems": 1024 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "nvmf_set_crdt", 00:23:56.338 "params": { 00:23:56.338 "crdt1": 0, 00:23:56.338 "crdt2": 0, 00:23:56.338 "crdt3": 0 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "nvmf_create_transport", 00:23:56.338 "params": { 00:23:56.338 "trtype": "TCP", 00:23:56.338 "max_queue_depth": 128, 00:23:56.338 "max_io_qpairs_per_ctrlr": 127, 00:23:56.338 "in_capsule_data_size": 4096, 00:23:56.338 "max_io_size": 131072, 00:23:56.338 "io_unit_size": 131072, 00:23:56.338 "max_aq_depth": 128, 00:23:56.338 "num_shared_buffers": 511, 00:23:56.338 "buf_cache_size": 4294967295, 00:23:56.338 "dif_insert_or_strip": false, 00:23:56.338 "zcopy": false, 00:23:56.338 "c2h_success": false, 00:23:56.338 "sock_priority": 0, 00:23:56.338 "abort_timeout_sec": 1, 00:23:56.338 "ack_timeout": 0, 00:23:56.338 "data_wr_pool_size": 0 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "nvmf_create_subsystem", 00:23:56.338 "params": { 00:23:56.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.338 "allow_any_host": false, 00:23:56.338 "serial_number": "00000000000000000000", 00:23:56.338 "model_number": "SPDK bdev Controller", 00:23:56.338 "max_namespaces": 32, 00:23:56.338 "min_cntlid": 1, 00:23:56.338 "max_cntlid": 65519, 00:23:56.338 "ana_reporting": false 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "nvmf_subsystem_add_host", 00:23:56.338 "params": { 00:23:56.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.338 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.338 "psk": "key0" 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "nvmf_subsystem_add_ns", 00:23:56.338 
"params": { 00:23:56.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.338 "namespace": { 00:23:56.338 "nsid": 1, 00:23:56.338 "bdev_name": "malloc0", 00:23:56.338 "nguid": "1D7B63CF35BA4786BD37D0DEDAA4AF68", 00:23:56.338 "uuid": "1d7b63cf-35ba-4786-bd37-d0dedaa4af68", 00:23:56.338 "no_auto_visible": false 00:23:56.338 } 00:23:56.338 } 00:23:56.338 }, 00:23:56.338 { 00:23:56.338 "method": "nvmf_subsystem_add_listener", 00:23:56.338 "params": { 00:23:56.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.338 "listen_address": { 00:23:56.338 "trtype": "TCP", 00:23:56.338 "adrfam": "IPv4", 00:23:56.338 "traddr": "10.0.0.2", 00:23:56.338 "trsvcid": "4420" 00:23:56.338 }, 00:23:56.338 "secure_channel": true 00:23:56.338 } 00:23:56.338 } 00:23:56.338 ] 00:23:56.338 } 00:23:56.338 ] 00:23:56.338 }' 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1632465 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1632465 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1632465 ']' 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.338 02:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.338 [2024-07-14 02:11:01.939284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:56.338 [2024-07-14 02:11:01.939383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.338 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.338 [2024-07-14 02:11:02.009275] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.597 [2024-07-14 02:11:02.098942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.597 [2024-07-14 02:11:02.099004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.597 [2024-07-14 02:11:02.099020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.597 [2024-07-14 02:11:02.099033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.597 [2024-07-14 02:11:02.099045] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:56.597 [2024-07-14 02:11:02.099128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.862 [2024-07-14 02:11:02.340494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.862 [2024-07-14 02:11:02.372518] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.862 [2024-07-14 02:11:02.385070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1632615 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1632615 /var/tmp/bdevperf.sock 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1632615 ']' 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:57.489 02:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:57.489 "subsystems": [ 00:23:57.489 { 00:23:57.489 "subsystem": "keyring", 00:23:57.489 "config": [ 00:23:57.489 { 00:23:57.489 "method": "keyring_file_add_key", 00:23:57.489 "params": { 00:23:57.489 "name": "key0", 00:23:57.489 "path": "/tmp/tmp.392SeEuL9o" 00:23:57.489 } 00:23:57.489 } 00:23:57.489 ] 00:23:57.489 }, 00:23:57.489 { 00:23:57.490 "subsystem": "iobuf", 00:23:57.490 "config": [ 00:23:57.490 { 00:23:57.490 "method": "iobuf_set_options", 00:23:57.490 "params": { 00:23:57.490 "small_pool_count": 8192, 00:23:57.490 "large_pool_count": 1024, 00:23:57.490 "small_bufsize": 8192, 00:23:57.490 "large_bufsize": 135168 00:23:57.490 } 00:23:57.490 } 00:23:57.490 ] 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "subsystem": "sock", 00:23:57.490 "config": [ 00:23:57.490 { 00:23:57.490 "method": "sock_set_default_impl", 00:23:57.490 "params": { 00:23:57.490 "impl_name": "posix" 00:23:57.490 } 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "method": "sock_impl_set_options", 00:23:57.490 "params": { 00:23:57.490 "impl_name": "ssl", 00:23:57.490 "recv_buf_size": 4096, 00:23:57.490 "send_buf_size": 4096, 00:23:57.490 "enable_recv_pipe": true, 00:23:57.490 "enable_quickack": false, 00:23:57.490 "enable_placement_id": 0, 00:23:57.490 "enable_zerocopy_send_server": true, 00:23:57.490 "enable_zerocopy_send_client": false, 00:23:57.490 "zerocopy_threshold": 0, 00:23:57.490 "tls_version": 0, 00:23:57.490 "enable_ktls": false 00:23:57.490 } 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "method": "sock_impl_set_options", 00:23:57.490 "params": { 00:23:57.490 "impl_name": "posix", 00:23:57.490 "recv_buf_size": 2097152, 00:23:57.490 "send_buf_size": 2097152, 00:23:57.490 "enable_recv_pipe": true, 00:23:57.490 "enable_quickack": false, 00:23:57.490 "enable_placement_id": 0, 00:23:57.490 "enable_zerocopy_send_server": true, 00:23:57.490 "enable_zerocopy_send_client": false, 00:23:57.490 "zerocopy_threshold": 0, 00:23:57.490 "tls_version": 0, 00:23:57.490 "enable_ktls": false 00:23:57.490 } 00:23:57.490 } 00:23:57.490 ] 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "subsystem": "vmd", 00:23:57.490 "config": [] 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "subsystem": "accel", 00:23:57.490 "config": [ 00:23:57.490 { 00:23:57.490 "method": "accel_set_options", 00:23:57.490 "params": { 00:23:57.490 "small_cache_size": 128, 00:23:57.490 "large_cache_size": 16, 00:23:57.490 "task_count": 2048, 00:23:57.490 "sequence_count": 2048, 00:23:57.490 "buf_count": 2048 00:23:57.490 } 00:23:57.490 } 00:23:57.490 ] 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "subsystem": "bdev", 00:23:57.490 "config": [ 00:23:57.490 { 00:23:57.490 "method": "bdev_set_options", 00:23:57.490 "params": { 00:23:57.490 "bdev_io_pool_size": 65535, 00:23:57.490 "bdev_io_cache_size": 256, 00:23:57.490 "bdev_auto_examine": true, 00:23:57.490 "iobuf_small_cache_size": 128, 00:23:57.490 "iobuf_large_cache_size": 16 00:23:57.490 } 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "method": "bdev_raid_set_options", 00:23:57.490 "params": { 00:23:57.490 "process_window_size_kb": 1024 00:23:57.490 } 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "method": "bdev_iscsi_set_options", 00:23:57.490 "params": { 00:23:57.490 "timeout_sec": 30 00:23:57.490 } 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "method": "bdev_nvme_set_options", 00:23:57.490 "params": { 00:23:57.490 "action_on_timeout": "none", 00:23:57.490 "timeout_us": 0, 00:23:57.490 "timeout_admin_us": 0, 00:23:57.490 "keep_alive_timeout_ms": 
10000, 00:23:57.490 "arbitration_burst": 0, 00:23:57.490 "low_priority_weight": 0, 00:23:57.490 "medium_priority_weight": 0, 00:23:57.490 "high_priority_weight": 0, 00:23:57.490 "nvme_adminq_poll_period_us": 10000, 00:23:57.490 "nvme_ioq_poll_period_us": 0, 00:23:57.490 "io_queue_requests": 512, 00:23:57.490 "delay_cmd_submit": true, 00:23:57.490 "transport_retry_count": 4, 00:23:57.490 "bdev_retry_count": 3, 00:23:57.490 "transport_ack_timeout": 0, 00:23:57.490 "ctrlr_loss_timeout_sec": 0, 00:23:57.490 "reconnect_delay_sec": 0, 00:23:57.490 "fast_io_fail_timeout_sec": 0, 00:23:57.490 "disable_auto_failback": false, 00:23:57.490 "generate_uuids": false, 00:23:57.490 "transport_tos": 0, 00:23:57.490 "nvme_error_stat": false, 00:23:57.490 "rdma_srq_size": 0, 00:23:57.490 "io_path_stat": false, 00:23:57.490 "allow_accel_sequence": false, 00:23:57.490 "rdma_max_cq_size": 0, 00:23:57.490 "rdma_cm_event_timeout_ms": 0, 00:23:57.490 "dhchap_digests": [ 00:23:57.490 "sha256", 00:23:57.490 "sha384", 00:23:57.490 "sha512" 00:23:57.490 ], 00:23:57.490 "dhchap_dhgroups": [ 00:23:57.490 "null", 00:23:57.490 "ffdhe2048", 00:23:57.490 "ffdhe3072", 00:23:57.490 "ffdhe4096", 00:23:57.490 "ffdhe6144", 00:23:57.490 "ffdhe8192" 00:23:57.490 ] 00:23:57.490 } 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "method": "bdev_nvme_attach_controller", 00:23:57.490 "params": { 00:23:57.490 "name": "nvme0", 00:23:57.490 "trtype": "TCP", 00:23:57.490 "adrfam": "IPv4", 00:23:57.490 "traddr": "10.0.0.2", 00:23:57.490 "trsvcid": "4420", 00:23:57.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.490 "prchk_reftag": false, 00:23:57.490 "prchk_guard": false, 00:23:57.490 "ctrlr_loss_timeout_sec": 0, 00:23:57.490 "reconnect_delay_sec": 0, 00:23:57.490 "fast_io_fail_timeout_sec": 0, 00:23:57.490 "psk": "key0", 00:23:57.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.490 "hdgst": false, 00:23:57.490 "ddgst": false 00:23:57.490 } 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "method": "bdev_nvme_set_hotplug", 00:23:57.490 "params": { 00:23:57.490 "period_us": 100000, 00:23:57.490 "enable": false 00:23:57.490 } 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "method": "bdev_enable_histogram", 00:23:57.490 "params": { 00:23:57.490 "name": "nvme0n1", 00:23:57.490 "enable": true 00:23:57.490 } 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "method": "bdev_wait_for_examine" 00:23:57.490 } 00:23:57.490 ] 00:23:57.490 }, 00:23:57.490 { 00:23:57.490 "subsystem": "nbd", 00:23:57.490 "config": [] 00:23:57.490 } 00:23:57.490 ] 00:23:57.490 }' 00:23:57.490 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.490 02:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.490 [2024-07-14 02:11:02.961469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
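Annotation: this bdevperf instance is likewise driven entirely by the JSON echoed above ("-c /dev/fd/63"): the keyring entry, the TLS attach (bdev_nvme_attach_controller with "psk": "key0") and bdev_enable_histogram are all applied at startup, so no further RPCs are needed before perform_tests. If the same blob were kept as a file, the PSK reference could be checked with jq, e.g. (file name is a placeholder):

    jq '.subsystems[] | select(.subsystem=="bdev").config[]
        | select(.method=="bdev_nvme_attach_controller").params.psk' /tmp/bperf.json
    # -> "key0"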
00:23:57.490 [2024-07-14 02:11:02.961562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632615 ] 00:23:57.490 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.490 [2024-07-14 02:11:03.027513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.490 [2024-07-14 02:11:03.119967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.754 [2024-07-14 02:11:03.305016] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.318 02:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.318 02:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:58.318 02:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:58.318 02:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:58.576 02:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.576 02:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:58.836 Running I/O for 1 seconds... 00:23:59.774 00:23:59.774 Latency(us) 00:23:59.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.775 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:59.775 Verification LBA range: start 0x0 length 0x2000 00:23:59.775 nvme0n1 : 1.06 1655.17 6.47 0.00 0.00 75423.74 8058.50 113401.55 00:23:59.775 =================================================================================================================== 00:23:59.775 Total : 1655.17 6.47 0.00 0.00 75423.74 8058.50 113401.55 00:23:59.775 0 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:59.775 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:59.775 nvmf_trace.0 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1632615 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1632615 ']' 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1632615 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632615 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632615' 00:24:00.034 killing process with pid 1632615 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1632615 00:24:00.034 Received shutdown signal, test time was about 1.000000 seconds 00:24:00.034 00:24:00.034 Latency(us) 00:24:00.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.034 =================================================================================================================== 00:24:00.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.034 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1632615 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.294 rmmod nvme_tcp 00:24:00.294 rmmod nvme_fabrics 00:24:00.294 rmmod nvme_keyring 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1632465 ']' 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1632465 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1632465 ']' 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1632465 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632465 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632465' 00:24:00.294 killing process with pid 1632465 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1632465 00:24:00.294 02:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1632465 00:24:00.555 02:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.555 02:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.555 02:11:06 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.555 02:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.555 02:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.555 02:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.555 02:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.555 02:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.461 02:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.461 02:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.FGv67Nbo8f /tmp/tmp.6pFl1sCf7A /tmp/tmp.392SeEuL9o 00:24:02.461 00:24:02.461 real 1m19.190s 00:24:02.461 user 2m4.341s 00:24:02.461 sys 0m28.862s 00:24:02.461 02:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.461 02:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.461 ************************************ 00:24:02.461 END TEST nvmf_tls 00:24:02.461 ************************************ 00:24:02.461 02:11:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:02.461 02:11:08 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:02.461 02:11:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.461 02:11:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.461 02:11:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.720 ************************************ 00:24:02.720 START TEST nvmf_fips 00:24:02.720 ************************************ 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:02.720 * Looking for test storage... 
00:24:02.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.720 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.721 02:11:08 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:02.721 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:02.722 Error setting digest 00:24:02.722 0052E8DF387F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:02.722 0052E8DF387F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.722 02:11:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.255 
02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:05.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:05.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:05.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:05.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.255 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:05.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:24:05.256 00:24:05.256 --- 10.0.0.2 ping statistics --- 00:24:05.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.256 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:24:05.256 00:24:05.256 --- 10.0.0.1 ping statistics --- 00:24:05.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.256 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1634971 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1634971 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1634971 ']' 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.256 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.256 [2024-07-14 02:11:10.708791] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:05.256 [2024-07-14 02:11:10.708896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.256 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.256 [2024-07-14 02:11:10.778911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.256 [2024-07-14 02:11:10.869837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.256 [2024-07-14 02:11:10.869917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
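A note on the interface setup traced just above: nvmf_tcp_init splits the two detected ice ports so that cvl_0_0 becomes the 10.0.0.2 target inside a private network namespace while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, letting a single host push real NVMe/TCP traffic end to end. The following is a minimal standalone sketch of that topology, not the script itself; the interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are taken from this particular run and will differ on other machines.
# sketch only -- mirrors the ip/iptables calls visible in this log, run as root
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                         # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP port 4420 through
ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check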
00:24:05.256 [2024-07-14 02:11:10.869933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.256 [2024-07-14 02:11:10.869959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.256 [2024-07-14 02:11:10.869969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.256 [2024-07-14 02:11:10.869998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.515 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.515 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:05.515 02:11:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:05.515 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:05.515 02:11:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.515 02:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.515 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:05.515 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:05.515 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.515 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:05.515 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.515 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.515 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.515 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:05.774 [2024-07-14 02:11:11.242908] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.774 [2024-07-14 02:11:11.258891] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.774 [2024-07-14 02:11:11.259115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.774 [2024-07-14 02:11:11.290630] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:05.774 malloc0 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1635118 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1635118 /var/tmp/bdevperf.sock 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1635118 ']' 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.774 02:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.774 [2024-07-14 02:11:11.384263] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:05.774 [2024-07-14 02:11:11.384343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635118 ] 00:24:05.774 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.774 [2024-07-14 02:11:11.444053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.033 [2024-07-14 02:11:11.533130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.033 02:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.033 02:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:06.033 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:06.292 [2024-07-14 02:11:11.870981] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.292 [2024-07-14 02:11:11.871129] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:06.292 TLSTESTn1 00:24:06.292 02:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:06.551 Running I/O for 10 seconds... 
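For reference, the client side of the TLSTESTn1 verify job running here reduces to a few visible steps: write the interchange-format PSK to a mode-0600 file, start bdevperf in passive mode (-z) on its own RPC socket, attach the TLS-protected controller, and trigger the queued I/O pass. Below is a hedged sketch using only the socket, NQNs, key and flags shown in this log; SPDK_DIR is shorthand introduced here for the workspace checkout, and the real test also runs a waitforlisten-style retry loop between starting bdevperf and issuing RPCs.
# sketch only -- condensed from the fips.sh commands traced in this run
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk            # workspace path from this run
KEY_FILE=$SPDK_DIR/test/nvmf/fips/key.txt
# TLS PSK in NVMe interchange format, written without a trailing newline and locked down
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_FILE"
chmod 0600 "$KEY_FILE"
# bdevperf idles (-z) until a bdev is attached through its private RPC socket
"$SPDK_DIR"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# attach the target subsystem over TCP with the PSK, then kick off the verify workload
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_FILE"
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests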
00:24:16.537 00:24:16.537 Latency(us) 00:24:16.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.537 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:16.537 Verification LBA range: start 0x0 length 0x2000 00:24:16.537 TLSTESTn1 : 10.06 1801.14 7.04 0.00 0.00 70858.94 9175.04 111071.38 00:24:16.537 =================================================================================================================== 00:24:16.537 Total : 1801.14 7.04 0.00 0.00 70858.94 9175.04 111071.38 00:24:16.538 0 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:16.538 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:16.538 nvmf_trace.0 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1635118 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1635118 ']' 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1635118 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1635118 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1635118' 00:24:16.796 killing process with pid 1635118 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1635118 00:24:16.796 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.796 00:24:16.796 Latency(us) 00:24:16.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.796 =================================================================================================================== 00:24:16.796 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.796 [2024-07-14 02:11:22.270360] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:16.796 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1635118 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.056 rmmod nvme_tcp 00:24:17.056 rmmod nvme_fabrics 00:24:17.056 rmmod nvme_keyring 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1634971 ']' 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1634971 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1634971 ']' 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1634971 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1634971 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1634971' 00:24:17.056 killing process with pid 1634971 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1634971 00:24:17.056 [2024-07-14 02:11:22.578603] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:17.056 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1634971 00:24:17.317 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:17.317 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:17.317 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:17.317 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.317 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.317 02:11:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.317 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.317 02:11:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.252 02:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.252 02:11:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.252 00:24:19.252 real 0m16.695s 00:24:19.252 user 0m20.281s 00:24:19.252 sys 0m6.658s 00:24:19.252 02:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:19.252 02:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.252 ************************************ 00:24:19.252 END TEST nvmf_fips 
00:24:19.252 ************************************ 00:24:19.252 02:11:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:19.252 02:11:24 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:19.252 02:11:24 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:19.252 02:11:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:19.252 02:11:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.252 02:11:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:19.252 ************************************ 00:24:19.252 START TEST nvmf_fuzz 00:24:19.252 ************************************ 00:24:19.252 02:11:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:19.513 * Looking for test storage... 00:24:19.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:19.513 02:11:24 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.513 02:11:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:21.417 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.417 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:21.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:21.418 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:21.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.418 02:11:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:24:21.418 00:24:21.418 --- 10.0.0.2 ping statistics --- 00:24:21.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.418 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:24:21.418 00:24:21.418 --- 10.0.0.1 ping statistics --- 00:24:21.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.418 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:21.418 02:11:27 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1638330 00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1638330 00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1638330 ']' 00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
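The nvmf_tcp_init steps traced above set up a back-to-back NVMe/TCP topology: one E810 port is moved into a network namespace to act as the target, the other stays in the default namespace as the initiator. Collapsed into a standalone sketch (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing and the namespace name are taken from this log; run as root):

  #!/usr/bin/env bash
  # Minimal sketch of the topology built by nvmf_tcp_init in the trace above.
  # Assumes the two NIC ports are cabled back-to-back and show up as cvl_0_0 / cvl_0_1.
  set -euo pipefail

  TARGET_IF=cvl_0_0        # port moved into the target namespace
  INITIATOR_IF=cvl_0_1     # port left in the default namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Let NVMe/TCP traffic (port 4420) arriving on the initiator-side interface in.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  # Sanity checks, mirroring the two pings in the log.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1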
00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.679 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.938 Malloc0 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:21.938 02:11:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:54.020 Fuzzing completed. 
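Condensed, the fabrics_fuzz flow traced above is: start nvmf_tgt inside the namespace, configure one subsystem over RPC, then point nvme_fuzz at its TCP listener. A rough equivalent is sketched below; the rpc.py invocations are an assumption (the log issues the same RPCs through the framework's rpc_cmd helper), and $SPDK_DIR stands in for the checkout path shown in the log:

  # Sketch of the fuzz target setup, assuming the cvl_0_0_ns_spdk namespace already exists.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  sleep 2   # the framework polls /var/tmp/spdk.sock instead of sleeping

  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create -b Malloc0 64 512
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 30-second randomized run against that listener, as in the first nvme_fuzz invocation above.
  "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a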
Shutting down the fuzz application 00:24:54.020 00:24:54.020 Dumping successful admin opcodes: 00:24:54.020 8, 9, 10, 24, 00:24:54.020 Dumping successful io opcodes: 00:24:54.020 0, 9, 00:24:54.020 NS: 0x200003aeff00 I/O qp, Total commands completed: 448951, total successful commands: 2611, random_seed: 3416676864 00:24:54.020 NS: 0x200003aeff00 admin qp, Total commands completed: 56208, total successful commands: 447, random_seed: 4173622336 00:24:54.020 02:11:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:54.020 Fuzzing completed. Shutting down the fuzz application 00:24:54.020 00:24:54.020 Dumping successful admin opcodes: 00:24:54.020 24, 00:24:54.020 Dumping successful io opcodes: 00:24:54.020 00:24:54.020 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2876278707 00:24:54.020 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2876396934 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.020 rmmod nvme_tcp 00:24:54.020 rmmod nvme_fabrics 00:24:54.020 rmmod nvme_keyring 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1638330 ']' 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1638330 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1638330 ']' 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1638330 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1638330 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1638330' 00:24:54.020 killing process with pid 1638330 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1638330 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1638330 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.020 02:11:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.553 02:12:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:56.553 02:12:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:56.553 00:24:56.553 real 0m36.799s 00:24:56.553 user 0m50.545s 00:24:56.553 sys 0m15.240s 00:24:56.553 02:12:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.553 02:12:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:56.553 ************************************ 00:24:56.553 END TEST nvmf_fuzz 00:24:56.554 ************************************ 00:24:56.554 02:12:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:56.554 02:12:01 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:56.554 02:12:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.554 02:12:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.554 02:12:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.554 ************************************ 00:24:56.554 START TEST nvmf_multiconnection 00:24:56.554 ************************************ 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:56.554 * Looking for test storage... 
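The teardown traced above undoes the setup in reverse: delete the fuzzed subsystem, unload the host-side NVMe modules, kill the target, and remove the namespace and addresses. A simplified stand-in is below; the framework's remove_spdk_ns helper is approximated here by ip netns delete, which is an assumption rather than its exact implementation:

  # Cleanup sketch, assuming $nvmfpid still holds the nvmf_tgt PID from the setup sketch.
  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  sync
  modprobe -v -r nvme-tcp || true       # the log retries these; the module may still be in use
  modprobe -v -r nvme-fabrics || true

  kill "$nvmfpid" && wait "$nvmfpid"    # stop the target before tearing down its namespace

  ip netns delete cvl_0_0_ns_spdk       # returns cvl_0_0 to the default namespace
  ip -4 addr flush cvl_0_1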
00:24:56.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:56.554 02:12:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.452 02:12:03 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:58.452 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:58.452 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:58.452 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:58.452 02:12:03 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:58.452 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:24:58.452 00:24:58.452 --- 10.0.0.2 ping statistics --- 00:24:58.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.452 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:58.452 00:24:58.452 --- 10.0.0.1 ping statistics --- 00:24:58.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.452 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.452 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1644078 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1644078 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1644078 ']' 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
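The multiconnection test restarts the same target with a four-core mask (-m 0xF) and then blocks in waitforlisten until the RPC socket answers. The loop below is only an illustration of that wait step, not the framework's implementation; it assumes rpc.py and /var/tmp/spdk.sock as seen in the log:

  # Poll until nvmf_tgt responds on its RPC socket, then proceed with configuration.
  wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # target died while we were waiting
          "$SPDK_DIR/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }

  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  wait_for_rpc "$nvmfpid"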
00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.453 02:12:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.453 [2024-07-14 02:12:03.938122] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:58.453 [2024-07-14 02:12:03.938193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.453 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.453 [2024-07-14 02:12:04.003409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.453 [2024-07-14 02:12:04.096204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.453 [2024-07-14 02:12:04.096260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.453 [2024-07-14 02:12:04.096273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.453 [2024-07-14 02:12:04.096283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.453 [2024-07-14 02:12:04.096293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.453 [2024-07-14 02:12:04.096355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.453 [2024-07-14 02:12:04.096444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.453 [2024-07-14 02:12:04.096512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.453 [2024-07-14 02:12:04.096515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 [2024-07-14 02:12:04.251787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 
02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 Malloc1 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 [2024-07-14 02:12:04.308989] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 Malloc2 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 Malloc3 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.712 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.971 Malloc4 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.971 Malloc5 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.971 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 Malloc6 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 Malloc7 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 Malloc8 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.972 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 Malloc9 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 Malloc10 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 Malloc11 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
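Condensed from the trace above, the target-side setup that multiconnection.sh performs amounts to the loop sketched below: one 64 MiB malloc bdev, one subsystem, one namespace and one TCP listener per index. This is a minimal sketch, not the script verbatim; the rpc.py path is assumed from this workspace's layout (the RPC names and arguments are the ones shown in the trace), and the count of 11 comes from the seq 1 11 that follows.

  # target-side setup, per subsystem index (sketch)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location of rpc.py in this tree
  for n in $(seq 1 11); do
      $rpc bdev_malloc_create 64 512 -b "Malloc$n"                              # 64 MiB bdev, 512 B blocks
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$n" -a -s "SPDK$n"   # allow any host, serial SPDK$n
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$n" "Malloc$n"       # expose the bdev as a namespace
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$n" -t tcp -a 10.0.0.2 -s 4420
  done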
00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.231 02:12:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:00.166 02:12:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:00.166 02:12:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:00.166 02:12:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:00.166 02:12:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:00.166 02:12:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:02.079 02:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:02.079 02:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:02.079 02:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:02.079 02:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:02.079 02:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:02.079 02:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:02.079 02:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:02.079 02:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:02.687 02:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:02.687 02:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:02.687 02:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.687 02:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:02.687 02:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:04.591 02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:04.591 02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:04.591 02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:04.591 02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:04.591 02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.591 
02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:04.591 02:12:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.591 02:12:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:05.527 02:12:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:05.527 02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:05.527 02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:05.527 02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:05.527 02:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:07.429 02:12:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:07.429 02:12:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:07.429 02:12:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:07.429 02:12:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:07.429 02:12:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.429 02:12:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:07.429 02:12:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.429 02:12:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:07.996 02:12:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:07.996 02:12:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.996 02:12:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.996 02:12:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:07.996 02:12:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.895 02:12:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.895 02:12:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.895 02:12:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:09.895 02:12:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:09.895 02:12:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.895 02:12:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:09.895 02:12:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.895 02:12:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:10.830 02:12:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:10.830 02:12:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:10.830 02:12:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.830 02:12:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:10.830 02:12:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:12.735 02:12:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:12.735 02:12:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:12.735 02:12:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:12.735 02:12:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:12.735 02:12:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.735 02:12:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:12.735 02:12:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.735 02:12:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:13.674 02:12:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:13.674 02:12:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:13.674 02:12:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.674 02:12:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:13.674 02:12:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:15.579 02:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:15.579 02:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:15.579 02:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:15.579 02:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:15.579 02:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.579 02:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:15.579 02:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.579 02:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:16.513 02:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:16.513 02:12:22 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:16.513 02:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:16.513 02:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:16.513 02:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:18.415 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:18.415 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:18.415 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:18.415 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:18.415 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.415 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:18.415 02:12:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.415 02:12:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:19.394 02:12:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:19.394 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:19.394 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:19.394 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:19.394 02:12:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:21.295 02:12:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:21.295 02:12:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:21.295 02:12:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:21.295 02:12:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:21.295 02:12:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.295 02:12:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:21.295 02:12:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.295 02:12:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:22.228 02:12:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:22.228 02:12:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.228 02:12:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:22.228 02:12:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
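The host-side pattern repeated above for each subsystem is: nvme connect over TCP, then waitforserial, which polls lsblk until a block device reporting the expected serial appears. A minimal sketch of that pattern follows; the host NQN/host ID and addresses are the ones shown in the trace, and the retry limit mirrors the (( i++ <= 15 )) loop traced above.

  # host-side connect + wait, per subsystem index (sketch)
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  for n in $(seq 1 11); do
      nvme connect --hostnqn="$hostnqn" --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
           -t tcp -n "nqn.2016-06.io.spdk:cnode$n" -a 10.0.0.2 -s 4420
      # waitforserial: up to 16 tries, 2 s apart, until a device with serial SPDK$n shows up
      i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$n") >= 1 )) && break
      done
  done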
00:25:22.228 02:12:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:24.131 02:12:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:24.131 02:12:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:24.131 02:12:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:24.131 02:12:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:24.131 02:12:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.131 02:12:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:24.131 02:12:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.131 02:12:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:25.067 02:12:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:25.067 02:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:25.067 02:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.067 02:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:25.067 02:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:26.971 02:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:26.971 02:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:26.971 02:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:26.971 02:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:26.971 02:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.971 02:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:26.971 02:12:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.971 02:12:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:27.904 02:12:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:27.904 02:12:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:27.904 02:12:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:27.904 02:12:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:27.904 02:12:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:30.442 02:12:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:30.442 02:12:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:25:30.442 02:12:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:30.442 02:12:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:30.442 02:12:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:30.442 02:12:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:30.442 02:12:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:30.442 [global] 00:25:30.442 thread=1 00:25:30.442 invalidate=1 00:25:30.442 rw=read 00:25:30.442 time_based=1 00:25:30.442 runtime=10 00:25:30.442 ioengine=libaio 00:25:30.442 direct=1 00:25:30.442 bs=262144 00:25:30.442 iodepth=64 00:25:30.442 norandommap=1 00:25:30.442 numjobs=1 00:25:30.442 00:25:30.442 [job0] 00:25:30.442 filename=/dev/nvme0n1 00:25:30.442 [job1] 00:25:30.442 filename=/dev/nvme10n1 00:25:30.442 [job2] 00:25:30.442 filename=/dev/nvme1n1 00:25:30.442 [job3] 00:25:30.442 filename=/dev/nvme2n1 00:25:30.442 [job4] 00:25:30.442 filename=/dev/nvme3n1 00:25:30.442 [job5] 00:25:30.442 filename=/dev/nvme4n1 00:25:30.442 [job6] 00:25:30.442 filename=/dev/nvme5n1 00:25:30.442 [job7] 00:25:30.442 filename=/dev/nvme6n1 00:25:30.442 [job8] 00:25:30.442 filename=/dev/nvme7n1 00:25:30.442 [job9] 00:25:30.442 filename=/dev/nvme8n1 00:25:30.442 [job10] 00:25:30.442 filename=/dev/nvme9n1 00:25:30.442 Could not set queue depth (nvme0n1) 00:25:30.442 Could not set queue depth (nvme10n1) 00:25:30.442 Could not set queue depth (nvme1n1) 00:25:30.442 Could not set queue depth (nvme2n1) 00:25:30.442 Could not set queue depth (nvme3n1) 00:25:30.442 Could not set queue depth (nvme4n1) 00:25:30.442 Could not set queue depth (nvme5n1) 00:25:30.442 Could not set queue depth (nvme6n1) 00:25:30.442 Could not set queue depth (nvme7n1) 00:25:30.442 Could not set queue depth (nvme8n1) 00:25:30.442 Could not set queue depth (nvme9n1) 00:25:30.442 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.442 fio-3.35 00:25:30.442 Starting 11 threads 00:25:42.651 00:25:42.651 job0: 
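The [global] section above is the job file fio-wrapper builds for "-p nvmf -i 262144 -d 64 -t read -r 10", with one [jobN] entry per connected namespace. Running a single one of those jobs by hand would look roughly like the sketch below; the device name is taken from the job list above, and the later randwrite pass differs only in --rw=randwrite.

  fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=262144 --iodepth=64 \
      --ioengine=libaio --direct=1 --time_based=1 --runtime=10 --numjobs=1 \
      --norandommap=1 --invalidate=1 --thread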
(groupid=0, jobs=1): err= 0: pid=1648835: Sun Jul 14 02:12:46 2024 00:25:42.651 read: IOPS=604, BW=151MiB/s (158MB/s)(1532MiB/10137msec) 00:25:42.651 slat (usec): min=8, max=115886, avg=1245.44, stdev=5057.02 00:25:42.651 clat (msec): min=2, max=611, avg=104.51, stdev=64.01 00:25:42.651 lat (msec): min=2, max=615, avg=105.75, stdev=64.72 00:25:42.651 clat percentiles (msec): 00:25:42.651 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 47], 20.00th=[ 59], 00:25:42.651 | 30.00th=[ 70], 40.00th=[ 84], 50.00th=[ 97], 60.00th=[ 109], 00:25:42.651 | 70.00th=[ 120], 80.00th=[ 138], 90.00th=[ 169], 95.00th=[ 209], 00:25:42.651 | 99.00th=[ 368], 99.50th=[ 531], 99.90th=[ 609], 99.95th=[ 609], 00:25:42.651 | 99.99th=[ 609] 00:25:42.651 bw ( KiB/s): min=45056, max=276992, per=9.30%, avg=155250.25, stdev=54775.92, samples=20 00:25:42.651 iops : min= 176, max= 1082, avg=606.40, stdev=213.98, samples=20 00:25:42.651 lat (msec) : 4=0.11%, 10=0.36%, 20=1.21%, 50=10.36%, 100=40.48% 00:25:42.651 lat (msec) : 250=45.23%, 500=1.75%, 750=0.51% 00:25:42.651 cpu : usr=0.33%, sys=1.77%, ctx=1517, majf=0, minf=4097 00:25:42.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.651 issued rwts: total=6129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.651 job1: (groupid=0, jobs=1): err= 0: pid=1648836: Sun Jul 14 02:12:46 2024 00:25:42.651 read: IOPS=913, BW=228MiB/s (240MB/s)(2313MiB/10124msec) 00:25:42.651 slat (usec): min=9, max=157866, avg=880.99, stdev=4038.65 00:25:42.651 clat (usec): min=1647, max=375138, avg=69110.58, stdev=45747.04 00:25:42.651 lat (usec): min=1695, max=417219, avg=69991.57, stdev=46274.05 00:25:42.651 clat percentiles (msec): 00:25:42.651 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 32], 20.00th=[ 41], 00:25:42.651 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 62], 60.00th=[ 68], 00:25:42.651 | 70.00th=[ 74], 80.00th=[ 87], 90.00th=[ 116], 95.00th=[ 157], 00:25:42.651 | 99.00th=[ 234], 99.50th=[ 351], 99.90th=[ 368], 99.95th=[ 376], 00:25:42.651 | 99.99th=[ 376] 00:25:42.651 bw ( KiB/s): min=70797, max=443904, per=14.08%, avg=235172.00, stdev=86851.54, samples=20 00:25:42.651 iops : min= 276, max= 1734, avg=918.60, stdev=339.32, samples=20 00:25:42.651 lat (msec) : 2=0.05%, 4=0.59%, 10=2.18%, 20=2.95%, 50=30.35% 00:25:42.651 lat (msec) : 100=50.30%, 250=12.61%, 500=0.96% 00:25:42.651 cpu : usr=0.47%, sys=2.88%, ctx=2103, majf=0, minf=4097 00:25:42.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.651 issued rwts: total=9250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.651 job2: (groupid=0, jobs=1): err= 0: pid=1648837: Sun Jul 14 02:12:46 2024 00:25:42.651 read: IOPS=526, BW=132MiB/s (138MB/s)(1333MiB/10133msec) 00:25:42.651 slat (usec): min=9, max=205307, avg=1312.59, stdev=6083.03 00:25:42.651 clat (usec): min=1018, max=606565, avg=120191.63, stdev=93111.95 00:25:42.651 lat (usec): min=1075, max=606577, avg=121504.22, stdev=93730.98 00:25:42.651 clat percentiles (msec): 00:25:42.651 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 28], 20.00th=[ 56], 00:25:42.651 | 
30.00th=[ 66], 40.00th=[ 80], 50.00th=[ 93], 60.00th=[ 110], 00:25:42.651 | 70.00th=[ 140], 80.00th=[ 201], 90.00th=[ 232], 95.00th=[ 259], 00:25:42.651 | 99.00th=[ 502], 99.50th=[ 550], 99.90th=[ 600], 99.95th=[ 609], 00:25:42.651 | 99.99th=[ 609] 00:25:42.651 bw ( KiB/s): min=38400, max=259072, per=8.08%, avg=134904.80, stdev=65148.17, samples=20 00:25:42.651 iops : min= 150, max= 1012, avg=526.95, stdev=254.51, samples=20 00:25:42.651 lat (msec) : 2=0.21%, 4=0.66%, 10=1.76%, 20=5.29%, 50=8.29% 00:25:42.651 lat (msec) : 100=37.18%, 250=40.41%, 500=5.01%, 750=1.20% 00:25:42.651 cpu : usr=0.27%, sys=1.79%, ctx=1365, majf=0, minf=4097 00:25:42.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.651 issued rwts: total=5333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.651 job3: (groupid=0, jobs=1): err= 0: pid=1648844: Sun Jul 14 02:12:46 2024 00:25:42.651 read: IOPS=527, BW=132MiB/s (138MB/s)(1321MiB/10017msec) 00:25:42.651 slat (usec): min=10, max=308723, avg=1511.22, stdev=7668.15 00:25:42.651 clat (msec): min=2, max=649, avg=119.71, stdev=91.30 00:25:42.651 lat (msec): min=2, max=687, avg=121.22, stdev=92.46 00:25:42.651 clat percentiles (msec): 00:25:42.651 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 49], 00:25:42.651 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 87], 60.00th=[ 116], 00:25:42.651 | 70.00th=[ 169], 80.00th=[ 199], 90.00th=[ 220], 95.00th=[ 251], 00:25:42.651 | 99.00th=[ 498], 99.50th=[ 575], 99.90th=[ 609], 99.95th=[ 617], 00:25:42.651 | 99.99th=[ 651] 00:25:42.651 bw ( KiB/s): min=34304, max=366080, per=8.01%, avg=133676.55, stdev=93466.07, samples=20 00:25:42.651 iops : min= 134, max= 1430, avg=522.15, stdev=365.12, samples=20 00:25:42.651 lat (msec) : 4=0.06%, 10=0.66%, 20=1.48%, 50=19.39%, 100=33.28% 00:25:42.651 lat (msec) : 250=40.23%, 500=3.95%, 750=0.95% 00:25:42.651 cpu : usr=0.31%, sys=1.79%, ctx=1196, majf=0, minf=4097 00:25:42.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:42.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.652 issued rwts: total=5285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.652 job4: (groupid=0, jobs=1): err= 0: pid=1648845: Sun Jul 14 02:12:46 2024 00:25:42.652 read: IOPS=521, BW=130MiB/s (137MB/s)(1315MiB/10092msec) 00:25:42.652 slat (usec): min=14, max=175172, avg=1853.38, stdev=6904.05 00:25:42.652 clat (msec): min=5, max=508, avg=120.81, stdev=72.77 00:25:42.652 lat (msec): min=5, max=550, avg=122.66, stdev=74.00 00:25:42.652 clat percentiles (msec): 00:25:42.652 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 54], 00:25:42.652 | 30.00th=[ 77], 40.00th=[ 92], 50.00th=[ 106], 60.00th=[ 128], 00:25:42.652 | 70.00th=[ 146], 80.00th=[ 188], 90.00th=[ 213], 95.00th=[ 234], 00:25:42.652 | 99.00th=[ 397], 99.50th=[ 443], 99.90th=[ 498], 99.95th=[ 510], 00:25:42.652 | 99.99th=[ 510] 00:25:42.652 bw ( KiB/s): min=32256, max=348672, per=7.97%, avg=133034.25, stdev=75805.49, samples=20 00:25:42.652 iops : min= 126, max= 1362, avg=519.65, stdev=296.12, samples=20 00:25:42.652 lat (msec) : 10=0.29%, 20=0.30%, 50=17.70%, 100=27.45%, 
250=51.30% 00:25:42.652 lat (msec) : 500=2.87%, 750=0.10% 00:25:42.652 cpu : usr=0.28%, sys=1.88%, ctx=1067, majf=0, minf=4097 00:25:42.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:42.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.652 issued rwts: total=5261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.652 job5: (groupid=0, jobs=1): err= 0: pid=1648846: Sun Jul 14 02:12:46 2024 00:25:42.652 read: IOPS=531, BW=133MiB/s (139MB/s)(1342MiB/10095msec) 00:25:42.652 slat (usec): min=9, max=357995, avg=900.35, stdev=7640.07 00:25:42.652 clat (usec): min=1938, max=612605, avg=119341.01, stdev=90907.64 00:25:42.652 lat (usec): min=1961, max=612629, avg=120241.37, stdev=91912.77 00:25:42.652 clat percentiles (msec): 00:25:42.652 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 26], 00:25:42.652 | 30.00th=[ 37], 40.00th=[ 84], 50.00th=[ 109], 60.00th=[ 138], 00:25:42.652 | 70.00th=[ 171], 80.00th=[ 207], 90.00th=[ 230], 95.00th=[ 255], 00:25:42.652 | 99.00th=[ 447], 99.50th=[ 493], 99.90th=[ 531], 99.95th=[ 609], 00:25:42.652 | 99.99th=[ 617] 00:25:42.652 bw ( KiB/s): min=31232, max=452096, per=8.13%, avg=135800.80, stdev=96509.11, samples=20 00:25:42.652 iops : min= 122, max= 1766, avg=530.45, stdev=377.00, samples=20 00:25:42.652 lat (msec) : 2=0.04%, 4=0.95%, 10=3.13%, 20=5.85%, 50=22.43% 00:25:42.652 lat (msec) : 100=13.22%, 250=48.71%, 500=5.55%, 750=0.13% 00:25:42.652 cpu : usr=0.22%, sys=1.71%, ctx=1674, majf=0, minf=3722 00:25:42.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:42.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.652 issued rwts: total=5369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.652 job6: (groupid=0, jobs=1): err= 0: pid=1648847: Sun Jul 14 02:12:46 2024 00:25:42.652 read: IOPS=638, BW=160MiB/s (167MB/s)(1598MiB/10017msec) 00:25:42.652 slat (usec): min=9, max=131973, avg=1241.04, stdev=4637.13 00:25:42.652 clat (msec): min=3, max=598, avg=98.99, stdev=63.52 00:25:42.652 lat (msec): min=3, max=598, avg=100.23, stdev=64.19 00:25:42.652 clat percentiles (msec): 00:25:42.652 | 1.00th=[ 20], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 61], 00:25:42.652 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 92], 00:25:42.652 | 70.00th=[ 104], 80.00th=[ 120], 90.00th=[ 169], 95.00th=[ 220], 00:25:42.652 | 99.00th=[ 342], 99.50th=[ 514], 99.90th=[ 592], 99.95th=[ 592], 00:25:42.652 | 99.99th=[ 600] 00:25:42.652 bw ( KiB/s): min=43008, max=300544, per=9.70%, avg=162022.40, stdev=64147.50, samples=20 00:25:42.652 iops : min= 168, max= 1174, avg=632.90, stdev=250.58, samples=20 00:25:42.652 lat (msec) : 4=0.11%, 10=0.39%, 20=0.53%, 50=6.05%, 100=60.17% 00:25:42.652 lat (msec) : 250=29.96%, 500=2.24%, 750=0.55% 00:25:42.652 cpu : usr=0.52%, sys=2.07%, ctx=1567, majf=0, minf=4097 00:25:42.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:42.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.652 issued rwts: total=6392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.652 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:25:42.652 job7: (groupid=0, jobs=1): err= 0: pid=1648848: Sun Jul 14 02:12:46 2024 00:25:42.652 read: IOPS=506, BW=127MiB/s (133MB/s)(1277MiB/10092msec) 00:25:42.652 slat (usec): min=9, max=246211, avg=1521.22, stdev=6946.32 00:25:42.652 clat (msec): min=2, max=571, avg=124.86, stdev=77.20 00:25:42.652 lat (msec): min=2, max=571, avg=126.38, stdev=78.16 00:25:42.652 clat percentiles (msec): 00:25:42.652 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 68], 00:25:42.652 | 30.00th=[ 79], 40.00th=[ 90], 50.00th=[ 106], 60.00th=[ 123], 00:25:42.652 | 70.00th=[ 144], 80.00th=[ 184], 90.00th=[ 241], 95.00th=[ 262], 00:25:42.652 | 99.00th=[ 414], 99.50th=[ 460], 99.90th=[ 502], 99.95th=[ 542], 00:25:42.652 | 99.99th=[ 575] 00:25:42.652 bw ( KiB/s): min=31232, max=318464, per=7.73%, avg=129091.00, stdev=69105.88, samples=20 00:25:42.652 iops : min= 122, max= 1244, avg=504.25, stdev=269.95, samples=20 00:25:42.652 lat (msec) : 4=0.08%, 10=1.08%, 20=1.23%, 50=8.71%, 100=35.72% 00:25:42.652 lat (msec) : 250=46.15%, 500=6.76%, 750=0.27% 00:25:42.652 cpu : usr=0.30%, sys=1.62%, ctx=1275, majf=0, minf=4097 00:25:42.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:42.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.652 issued rwts: total=5107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.652 job8: (groupid=0, jobs=1): err= 0: pid=1648849: Sun Jul 14 02:12:46 2024 00:25:42.652 read: IOPS=678, BW=170MiB/s (178MB/s)(1712MiB/10087msec) 00:25:42.652 slat (usec): min=8, max=226011, avg=1003.32, stdev=5927.79 00:25:42.652 clat (usec): min=1426, max=806940, avg=93184.62, stdev=83817.14 00:25:42.652 lat (usec): min=1514, max=806955, avg=94187.94, stdev=84475.73 00:25:42.652 clat percentiles (msec): 00:25:42.652 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 35], 00:25:42.652 | 30.00th=[ 44], 40.00th=[ 54], 50.00th=[ 71], 60.00th=[ 88], 00:25:42.652 | 70.00th=[ 110], 80.00th=[ 134], 90.00th=[ 199], 95.00th=[ 257], 00:25:42.652 | 99.00th=[ 388], 99.50th=[ 609], 99.90th=[ 625], 99.95th=[ 625], 00:25:42.652 | 99.99th=[ 810] 00:25:42.652 bw ( KiB/s): min=46172, max=418304, per=10.40%, avg=173688.65, stdev=98670.25, samples=20 00:25:42.652 iops : min= 180, max= 1634, avg=678.45, stdev=385.45, samples=20 00:25:42.652 lat (msec) : 2=0.01%, 4=0.12%, 10=1.07%, 20=8.07%, 50=27.13% 00:25:42.652 lat (msec) : 100=29.23%, 250=28.70%, 500=5.01%, 750=0.64%, 1000=0.01% 00:25:42.652 cpu : usr=0.40%, sys=2.00%, ctx=1884, majf=0, minf=4097 00:25:42.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:42.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.652 issued rwts: total=6849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.652 job9: (groupid=0, jobs=1): err= 0: pid=1648850: Sun Jul 14 02:12:46 2024 00:25:42.652 read: IOPS=552, BW=138MiB/s (145MB/s)(1398MiB/10130msec) 00:25:42.652 slat (usec): min=9, max=225512, avg=1560.01, stdev=7108.43 00:25:42.652 clat (msec): min=5, max=657, avg=114.27, stdev=76.37 00:25:42.652 lat (msec): min=5, max=657, avg=115.83, stdev=77.14 00:25:42.652 clat percentiles (msec): 00:25:42.652 | 
1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 37], 20.00th=[ 60], 00:25:42.652 | 30.00th=[ 77], 40.00th=[ 92], 50.00th=[ 107], 60.00th=[ 122], 00:25:42.652 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 241], 00:25:42.652 | 99.00th=[ 439], 99.50th=[ 609], 99.90th=[ 642], 99.95th=[ 642], 00:25:42.652 | 99.99th=[ 659] 00:25:42.652 bw ( KiB/s): min=37376, max=257024, per=8.48%, avg=141531.05, stdev=52595.11, samples=20 00:25:42.652 iops : min= 146, max= 1004, avg=552.85, stdev=205.45, samples=20 00:25:42.652 lat (msec) : 10=1.66%, 20=3.02%, 50=10.48%, 100=30.42%, 250=50.46% 00:25:42.652 lat (msec) : 500=3.02%, 750=0.93% 00:25:42.652 cpu : usr=0.30%, sys=1.98%, ctx=1271, majf=0, minf=4097 00:25:42.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:42.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.653 issued rwts: total=5592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.653 job10: (groupid=0, jobs=1): err= 0: pid=1648851: Sun Jul 14 02:12:46 2024 00:25:42.653 read: IOPS=547, BW=137MiB/s (144MB/s)(1388MiB/10131msec) 00:25:42.653 slat (usec): min=9, max=291113, avg=1128.16, stdev=8238.06 00:25:42.653 clat (msec): min=2, max=505, avg=115.58, stdev=92.66 00:25:42.653 lat (msec): min=2, max=675, avg=116.70, stdev=93.64 00:25:42.653 clat percentiles (msec): 00:25:42.653 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 28], 20.00th=[ 41], 00:25:42.653 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 79], 60.00th=[ 113], 00:25:42.653 | 70.00th=[ 144], 80.00th=[ 199], 90.00th=[ 239], 95.00th=[ 309], 00:25:42.653 | 99.00th=[ 401], 99.50th=[ 477], 99.90th=[ 506], 99.95th=[ 506], 00:25:42.653 | 99.99th=[ 506] 00:25:42.653 bw ( KiB/s): min=39936, max=282624, per=8.41%, avg=140478.85, stdev=74030.78, samples=20 00:25:42.653 iops : min= 156, max= 1104, avg=548.70, stdev=289.18, samples=20 00:25:42.653 lat (msec) : 4=0.31%, 10=1.39%, 20=4.40%, 50=20.28%, 100=30.50% 00:25:42.653 lat (msec) : 250=35.20%, 500=7.80%, 750=0.13% 00:25:42.653 cpu : usr=0.24%, sys=1.58%, ctx=1523, majf=0, minf=4097 00:25:42.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:42.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.653 issued rwts: total=5551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.653 00:25:42.653 Run status group 0 (all jobs): 00:25:42.653 READ: bw=1631MiB/s (1710MB/s), 127MiB/s-228MiB/s (133MB/s-240MB/s), io=16.1GiB (17.3GB), run=10017-10137msec 00:25:42.653 00:25:42.653 Disk stats (read/write): 00:25:42.653 nvme0n1: ios=12034/0, merge=0/0, ticks=1229775/0, in_queue=1229775, util=97.19% 00:25:42.653 nvme10n1: ios=18309/0, merge=0/0, ticks=1233337/0, in_queue=1233337, util=97.40% 00:25:42.653 nvme1n1: ios=10456/0, merge=0/0, ticks=1237389/0, in_queue=1237389, util=97.69% 00:25:42.653 nvme2n1: ios=10341/0, merge=0/0, ticks=1237221/0, in_queue=1237221, util=97.84% 00:25:42.653 nvme3n1: ios=10305/0, merge=0/0, ticks=1228136/0, in_queue=1228136, util=97.93% 00:25:42.653 nvme4n1: ios=10556/0, merge=0/0, ticks=1241112/0, in_queue=1241112, util=98.27% 00:25:42.653 nvme5n1: ios=12562/0, merge=0/0, ticks=1232091/0, in_queue=1232091, util=98.40% 00:25:42.653 nvme6n1: ios=10009/0, merge=0/0, 
ticks=1229190/0, in_queue=1229190, util=98.52% 00:25:42.653 nvme7n1: ios=13514/0, merge=0/0, ticks=1236933/0, in_queue=1236933, util=98.90% 00:25:42.653 nvme8n1: ios=11052/0, merge=0/0, ticks=1233282/0, in_queue=1233282, util=99.07% 00:25:42.653 nvme9n1: ios=10952/0, merge=0/0, ticks=1243690/0, in_queue=1243690, util=99.20% 00:25:42.653 02:12:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:42.653 [global] 00:25:42.653 thread=1 00:25:42.653 invalidate=1 00:25:42.653 rw=randwrite 00:25:42.653 time_based=1 00:25:42.653 runtime=10 00:25:42.653 ioengine=libaio 00:25:42.653 direct=1 00:25:42.653 bs=262144 00:25:42.653 iodepth=64 00:25:42.653 norandommap=1 00:25:42.653 numjobs=1 00:25:42.653 00:25:42.653 [job0] 00:25:42.653 filename=/dev/nvme0n1 00:25:42.653 [job1] 00:25:42.653 filename=/dev/nvme10n1 00:25:42.653 [job2] 00:25:42.653 filename=/dev/nvme1n1 00:25:42.653 [job3] 00:25:42.653 filename=/dev/nvme2n1 00:25:42.653 [job4] 00:25:42.653 filename=/dev/nvme3n1 00:25:42.653 [job5] 00:25:42.653 filename=/dev/nvme4n1 00:25:42.653 [job6] 00:25:42.653 filename=/dev/nvme5n1 00:25:42.653 [job7] 00:25:42.653 filename=/dev/nvme6n1 00:25:42.653 [job8] 00:25:42.653 filename=/dev/nvme7n1 00:25:42.653 [job9] 00:25:42.653 filename=/dev/nvme8n1 00:25:42.653 [job10] 00:25:42.653 filename=/dev/nvme9n1 00:25:42.653 Could not set queue depth (nvme0n1) 00:25:42.653 Could not set queue depth (nvme10n1) 00:25:42.653 Could not set queue depth (nvme1n1) 00:25:42.653 Could not set queue depth (nvme2n1) 00:25:42.653 Could not set queue depth (nvme3n1) 00:25:42.653 Could not set queue depth (nvme4n1) 00:25:42.653 Could not set queue depth (nvme5n1) 00:25:42.653 Could not set queue depth (nvme6n1) 00:25:42.653 Could not set queue depth (nvme7n1) 00:25:42.653 Could not set queue depth (nvme8n1) 00:25:42.653 Could not set queue depth (nvme9n1) 00:25:42.653 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.653 fio-3.35 00:25:42.653 Starting 11 threads 00:25:52.683 00:25:52.683 job0: (groupid=0, jobs=1): err= 0: pid=1649871: Sun Jul 14 02:12:57 2024 
00:25:52.683 write: IOPS=476, BW=119MiB/s (125MB/s)(1200MiB/10081msec); 0 zone resets 00:25:52.683 slat (usec): min=26, max=39258, avg=1849.96, stdev=3789.56 00:25:52.683 clat (msec): min=2, max=672, avg=132.53, stdev=40.14 00:25:52.683 lat (msec): min=2, max=672, avg=134.38, stdev=40.67 00:25:52.683 clat percentiles (msec): 00:25:52.683 | 1.00th=[ 13], 5.00th=[ 61], 10.00th=[ 85], 20.00th=[ 96], 00:25:52.683 | 30.00th=[ 120], 40.00th=[ 133], 50.00th=[ 140], 60.00th=[ 146], 00:25:52.683 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 192], 00:25:52.683 | 99.00th=[ 220], 99.50th=[ 224], 99.90th=[ 228], 99.95th=[ 230], 00:25:52.683 | 99.99th=[ 676] 00:25:52.683 bw ( KiB/s): min=83968, max=194682, per=10.25%, avg=121212.90, stdev=29232.18, samples=20 00:25:52.683 iops : min= 328, max= 760, avg=473.45, stdev=114.14, samples=20 00:25:52.683 lat (msec) : 4=0.17%, 10=0.52%, 20=0.92%, 50=1.67%, 100=18.80% 00:25:52.683 lat (msec) : 250=77.91%, 750=0.02% 00:25:52.683 cpu : usr=1.56%, sys=1.43%, ctx=1774, majf=0, minf=1 00:25:52.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:52.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.683 issued rwts: total=0,4799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.683 job1: (groupid=0, jobs=1): err= 0: pid=1649872: Sun Jul 14 02:12:57 2024 00:25:52.683 write: IOPS=483, BW=121MiB/s (127MB/s)(1224MiB/10129msec); 0 zone resets 00:25:52.683 slat (usec): min=25, max=100026, avg=1949.78, stdev=4561.74 00:25:52.683 clat (msec): min=4, max=274, avg=130.40, stdev=42.26 00:25:52.683 lat (msec): min=4, max=274, avg=132.35, stdev=42.76 00:25:52.683 clat percentiles (msec): 00:25:52.683 | 1.00th=[ 38], 5.00th=[ 77], 10.00th=[ 92], 20.00th=[ 104], 00:25:52.683 | 30.00th=[ 107], 40.00th=[ 109], 50.00th=[ 121], 60.00th=[ 133], 00:25:52.683 | 70.00th=[ 142], 80.00th=[ 161], 90.00th=[ 190], 95.00th=[ 226], 00:25:52.683 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 266], 99.95th=[ 268], 00:25:52.683 | 99.99th=[ 275] 00:25:52.683 bw ( KiB/s): min=67584, max=182784, per=10.46%, avg=123697.55, stdev=32889.37, samples=20 00:25:52.683 iops : min= 264, max= 714, avg=483.15, stdev=128.43, samples=20 00:25:52.683 lat (msec) : 10=0.14%, 20=0.25%, 50=1.00%, 100=12.52%, 250=85.60% 00:25:52.683 lat (msec) : 500=0.49% 00:25:52.683 cpu : usr=1.42%, sys=1.56%, ctx=1475, majf=0, minf=1 00:25:52.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:52.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.683 issued rwts: total=0,4896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.683 job2: (groupid=0, jobs=1): err= 0: pid=1649873: Sun Jul 14 02:12:57 2024 00:25:52.683 write: IOPS=500, BW=125MiB/s (131MB/s)(1269MiB/10150msec); 0 zone resets 00:25:52.683 slat (usec): min=19, max=84916, avg=978.48, stdev=3504.82 00:25:52.683 clat (usec): min=1360, max=418582, avg=126868.75, stdev=73108.99 00:25:52.683 lat (usec): min=1406, max=418622, avg=127847.24, stdev=73714.67 00:25:52.683 clat percentiles (msec): 00:25:52.683 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 53], 00:25:52.683 | 30.00th=[ 95], 40.00th=[ 117], 50.00th=[ 136], 60.00th=[ 146], 
00:25:52.683 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 218], 95.00th=[ 243], 00:25:52.683 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 418], 99.95th=[ 418], 00:25:52.683 | 99.99th=[ 418] 00:25:52.683 bw ( KiB/s): min=77824, max=196608, per=10.85%, avg=128345.90, stdev=37013.76, samples=20 00:25:52.683 iops : min= 304, max= 768, avg=501.30, stdev=144.65, samples=20 00:25:52.683 lat (msec) : 2=0.10%, 4=0.75%, 10=3.35%, 20=5.52%, 50=9.77% 00:25:52.683 lat (msec) : 100=12.41%, 250=63.94%, 500=4.18% 00:25:52.683 cpu : usr=1.32%, sys=1.93%, ctx=3487, majf=0, minf=1 00:25:52.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:52.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.683 issued rwts: total=0,5077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.683 job3: (groupid=0, jobs=1): err= 0: pid=1649874: Sun Jul 14 02:12:57 2024 00:25:52.683 write: IOPS=308, BW=77.1MiB/s (80.9MB/s)(789MiB/10228msec); 0 zone resets 00:25:52.683 slat (usec): min=18, max=185021, avg=2603.76, stdev=7990.66 00:25:52.683 clat (usec): min=1773, max=630015, avg=204677.19, stdev=118853.72 00:25:52.683 lat (usec): min=1809, max=630047, avg=207280.95, stdev=120572.35 00:25:52.683 clat percentiles (msec): 00:25:52.683 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 34], 20.00th=[ 91], 00:25:52.683 | 30.00th=[ 150], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 220], 00:25:52.683 | 70.00th=[ 271], 80.00th=[ 326], 90.00th=[ 368], 95.00th=[ 384], 00:25:52.683 | 99.00th=[ 468], 99.50th=[ 558], 99.90th=[ 609], 99.95th=[ 634], 00:25:52.683 | 99.99th=[ 634] 00:25:52.683 bw ( KiB/s): min=40960, max=220231, per=6.69%, avg=79131.25, stdev=47106.92, samples=20 00:25:52.683 iops : min= 160, max= 860, avg=309.05, stdev=183.99, samples=20 00:25:52.683 lat (msec) : 2=0.03%, 4=0.44%, 10=2.41%, 20=2.57%, 50=7.16% 00:25:52.683 lat (msec) : 100=9.51%, 250=42.88%, 500=34.17%, 750=0.82% 00:25:52.683 cpu : usr=0.86%, sys=0.91%, ctx=1721, majf=0, minf=1 00:25:52.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:52.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.683 issued rwts: total=0,3155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.683 job4: (groupid=0, jobs=1): err= 0: pid=1649875: Sun Jul 14 02:12:57 2024 00:25:52.683 write: IOPS=320, BW=80.2MiB/s (84.0MB/s)(821MiB/10237msec); 0 zone resets 00:25:52.683 slat (usec): min=21, max=107771, avg=3029.97, stdev=7106.25 00:25:52.683 clat (msec): min=7, max=545, avg=196.49, stdev=109.22 00:25:52.683 lat (msec): min=7, max=545, avg=199.52, stdev=110.64 00:25:52.683 clat percentiles (msec): 00:25:52.683 | 1.00th=[ 57], 5.00th=[ 81], 10.00th=[ 92], 20.00th=[ 105], 00:25:52.683 | 30.00th=[ 117], 40.00th=[ 136], 50.00th=[ 148], 60.00th=[ 180], 00:25:52.683 | 70.00th=[ 247], 80.00th=[ 321], 90.00th=[ 368], 95.00th=[ 409], 00:25:52.683 | 99.00th=[ 456], 99.50th=[ 468], 99.90th=[ 514], 99.95th=[ 550], 00:25:52.683 | 99.99th=[ 550] 00:25:52.683 bw ( KiB/s): min=38912, max=183296, per=6.96%, avg=82376.60, stdev=43304.06, samples=20 00:25:52.683 iops : min= 152, max= 716, avg=321.70, stdev=169.11, samples=20 00:25:52.683 lat (msec) : 10=0.03%, 20=0.12%, 50=0.67%, 100=16.42%, 
250=53.60% 00:25:52.683 lat (msec) : 500=28.98%, 750=0.18% 00:25:52.683 cpu : usr=0.95%, sys=0.98%, ctx=886, majf=0, minf=1 00:25:52.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:52.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.683 issued rwts: total=0,3282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.683 job5: (groupid=0, jobs=1): err= 0: pid=1649887: Sun Jul 14 02:12:57 2024 00:25:52.683 write: IOPS=292, BW=73.2MiB/s (76.8MB/s)(749MiB/10229msec); 0 zone resets 00:25:52.683 slat (usec): min=21, max=59032, avg=3187.92, stdev=6947.95 00:25:52.683 clat (msec): min=10, max=544, avg=215.13, stdev=92.68 00:25:52.683 lat (msec): min=10, max=544, avg=218.32, stdev=93.80 00:25:52.683 clat percentiles (msec): 00:25:52.683 | 1.00th=[ 39], 5.00th=[ 102], 10.00th=[ 117], 20.00th=[ 136], 00:25:52.683 | 30.00th=[ 146], 40.00th=[ 159], 50.00th=[ 188], 60.00th=[ 232], 00:25:52.683 | 70.00th=[ 271], 80.00th=[ 317], 90.00th=[ 342], 95.00th=[ 368], 00:25:52.684 | 99.00th=[ 418], 99.50th=[ 464], 99.90th=[ 527], 99.95th=[ 542], 00:25:52.684 | 99.99th=[ 542] 00:25:52.684 bw ( KiB/s): min=43008, max=120832, per=6.35%, avg=75092.45, stdev=26481.90, samples=20 00:25:52.684 iops : min= 168, max= 472, avg=293.30, stdev=103.44, samples=20 00:25:52.684 lat (msec) : 20=0.23%, 50=1.37%, 100=3.30%, 250=59.76%, 500=35.00% 00:25:52.684 lat (msec) : 750=0.33% 00:25:52.684 cpu : usr=0.85%, sys=0.87%, ctx=914, majf=0, minf=1 00:25:52.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:52.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.684 issued rwts: total=0,2997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.684 job6: (groupid=0, jobs=1): err= 0: pid=1649888: Sun Jul 14 02:12:57 2024 00:25:52.684 write: IOPS=385, BW=96.4MiB/s (101MB/s)(987MiB/10238msec); 0 zone resets 00:25:52.684 slat (usec): min=16, max=146431, avg=2305.25, stdev=7095.65 00:25:52.684 clat (usec): min=1314, max=589592, avg=163536.98, stdev=115394.39 00:25:52.684 lat (usec): min=1355, max=589641, avg=165842.22, stdev=116870.62 00:25:52.684 clat percentiles (msec): 00:25:52.684 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 57], 20.00th=[ 72], 00:25:52.684 | 30.00th=[ 75], 40.00th=[ 125], 50.00th=[ 136], 60.00th=[ 155], 00:25:52.684 | 70.00th=[ 188], 80.00th=[ 228], 90.00th=[ 376], 95.00th=[ 414], 00:25:52.684 | 99.00th=[ 485], 99.50th=[ 502], 99.90th=[ 575], 99.95th=[ 592], 00:25:52.684 | 99.99th=[ 592] 00:25:52.684 bw ( KiB/s): min=36864, max=246784, per=8.41%, avg=99435.40, stdev=58169.00, samples=20 00:25:52.684 iops : min= 144, max= 964, avg=388.40, stdev=227.22, samples=20 00:25:52.684 lat (msec) : 2=0.38%, 4=0.20%, 10=1.54%, 20=1.90%, 50=2.89% 00:25:52.684 lat (msec) : 100=28.16%, 250=48.06%, 500=16.31%, 750=0.56% 00:25:52.684 cpu : usr=1.10%, sys=1.19%, ctx=1492, majf=0, minf=1 00:25:52.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:52.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.684 issued rwts: total=0,3949,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:25:52.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.684 job7: (groupid=0, jobs=1): err= 0: pid=1649889: Sun Jul 14 02:12:57 2024 00:25:52.684 write: IOPS=549, BW=137MiB/s (144MB/s)(1391MiB/10122msec); 0 zone resets 00:25:52.684 slat (usec): min=25, max=165865, avg=1387.39, stdev=3994.65 00:25:52.684 clat (msec): min=4, max=500, avg=114.99, stdev=55.64 00:25:52.684 lat (msec): min=5, max=500, avg=116.38, stdev=55.93 00:25:52.684 clat percentiles (msec): 00:25:52.684 | 1.00th=[ 20], 5.00th=[ 56], 10.00th=[ 67], 20.00th=[ 74], 00:25:52.684 | 30.00th=[ 83], 40.00th=[ 100], 50.00th=[ 106], 60.00th=[ 109], 00:25:52.684 | 70.00th=[ 123], 80.00th=[ 150], 90.00th=[ 192], 95.00th=[ 226], 00:25:52.684 | 99.00th=[ 292], 99.50th=[ 380], 99.90th=[ 477], 99.95th=[ 493], 00:25:52.684 | 99.99th=[ 502] 00:25:52.684 bw ( KiB/s): min=68471, max=219136, per=11.90%, avg=140762.15, stdev=38029.00, samples=20 00:25:52.684 iops : min= 267, max= 856, avg=549.75, stdev=148.62, samples=20 00:25:52.684 lat (msec) : 10=0.11%, 20=0.92%, 50=3.29%, 100=36.46%, 250=57.48% 00:25:52.684 lat (msec) : 500=1.73%, 750=0.02% 00:25:52.684 cpu : usr=1.74%, sys=1.98%, ctx=2472, majf=0, minf=1 00:25:52.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:52.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.684 issued rwts: total=0,5562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.684 job8: (groupid=0, jobs=1): err= 0: pid=1649890: Sun Jul 14 02:12:57 2024 00:25:52.684 write: IOPS=325, BW=81.4MiB/s (85.3MB/s)(833MiB/10235msec); 0 zone resets 00:25:52.684 slat (usec): min=20, max=116255, avg=2063.57, stdev=6523.29 00:25:52.684 clat (usec): min=1924, max=557426, avg=194470.21, stdev=120420.46 00:25:52.684 lat (usec): min=1971, max=557468, avg=196533.78, stdev=121592.41 00:25:52.684 clat percentiles (msec): 00:25:52.684 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 35], 20.00th=[ 74], 00:25:52.684 | 30.00th=[ 107], 40.00th=[ 159], 50.00th=[ 186], 60.00th=[ 215], 00:25:52.684 | 70.00th=[ 251], 80.00th=[ 317], 90.00th=[ 380], 95.00th=[ 405], 00:25:52.684 | 99.00th=[ 447], 99.50th=[ 472], 99.90th=[ 527], 99.95th=[ 550], 00:25:52.684 | 99.99th=[ 558] 00:25:52.684 bw ( KiB/s): min=38912, max=204288, per=7.07%, avg=83644.20, stdev=42178.49, samples=20 00:25:52.684 iops : min= 152, max= 798, avg=326.65, stdev=164.71, samples=20 00:25:52.684 lat (msec) : 2=0.06%, 4=0.75%, 10=1.98%, 20=2.79%, 50=6.87% 00:25:52.684 lat (msec) : 100=17.11%, 250=39.96%, 500=30.32%, 750=0.15% 00:25:52.684 cpu : usr=1.06%, sys=1.03%, ctx=1891, majf=0, minf=1 00:25:52.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:52.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.684 issued rwts: total=0,3331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.684 job9: (groupid=0, jobs=1): err= 0: pid=1649891: Sun Jul 14 02:12:57 2024 00:25:52.684 write: IOPS=488, BW=122MiB/s (128MB/s)(1231MiB/10080msec); 0 zone resets 00:25:52.684 slat (usec): min=17, max=132658, avg=1638.95, stdev=4348.27 00:25:52.684 clat (msec): min=3, max=500, avg=129.34, stdev=61.09 00:25:52.684 lat (msec): min=3, max=500, avg=130.98, stdev=61.91 
00:25:52.684 clat percentiles (msec): 00:25:52.684 | 1.00th=[ 14], 5.00th=[ 35], 10.00th=[ 63], 20.00th=[ 82], 00:25:52.684 | 30.00th=[ 89], 40.00th=[ 111], 50.00th=[ 127], 60.00th=[ 146], 00:25:52.684 | 70.00th=[ 155], 80.00th=[ 176], 90.00th=[ 192], 95.00th=[ 230], 00:25:52.684 | 99.00th=[ 355], 99.50th=[ 372], 99.90th=[ 409], 99.95th=[ 414], 00:25:52.684 | 99.99th=[ 502] 00:25:52.684 bw ( KiB/s): min=55296, max=219209, per=10.52%, avg=124379.45, stdev=40407.23, samples=20 00:25:52.684 iops : min= 216, max= 856, avg=485.80, stdev=157.78, samples=20 00:25:52.684 lat (msec) : 4=0.12%, 10=0.39%, 20=1.08%, 50=5.79%, 100=27.34% 00:25:52.684 lat (msec) : 250=61.79%, 500=3.47%, 750=0.02% 00:25:52.684 cpu : usr=1.59%, sys=1.57%, ctx=2244, majf=0, minf=1 00:25:52.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:52.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.684 issued rwts: total=0,4923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.684 job10: (groupid=0, jobs=1): err= 0: pid=1649892: Sun Jul 14 02:12:57 2024 00:25:52.684 write: IOPS=526, BW=132MiB/s (138MB/s)(1333MiB/10116msec); 0 zone resets 00:25:52.684 slat (usec): min=20, max=159929, avg=1641.75, stdev=4744.42 00:25:52.684 clat (msec): min=3, max=378, avg=119.58, stdev=53.41 00:25:52.684 lat (msec): min=3, max=378, avg=121.22, stdev=53.95 00:25:52.684 clat percentiles (msec): 00:25:52.684 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 59], 20.00th=[ 84], 00:25:52.684 | 30.00th=[ 97], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 113], 00:25:52.684 | 70.00th=[ 142], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 205], 00:25:52.684 | 99.00th=[ 300], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 359], 00:25:52.684 | 99.99th=[ 380] 00:25:52.684 bw ( KiB/s): min=77668, max=234496, per=11.40%, avg=134842.35, stdev=40126.43, samples=20 00:25:52.684 iops : min= 303, max= 916, avg=526.70, stdev=156.78, samples=20 00:25:52.684 lat (msec) : 4=0.04%, 10=0.60%, 20=1.78%, 50=5.72%, 100=24.63% 00:25:52.684 lat (msec) : 250=65.30%, 500=1.93% 00:25:52.684 cpu : usr=1.70%, sys=1.65%, ctx=2072, majf=0, minf=1 00:25:52.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:52.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.684 issued rwts: total=0,5331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.684 00:25:52.684 Run status group 0 (all jobs): 00:25:52.684 WRITE: bw=1155MiB/s (1211MB/s), 73.2MiB/s-137MiB/s (76.8MB/s-144MB/s), io=11.5GiB (12.4GB), run=10080-10238msec 00:25:52.684 00:25:52.684 Disk stats (read/write): 00:25:52.684 nvme0n1: ios=52/9359, merge=0/0, ticks=800/1213383, in_queue=1214183, util=99.06% 00:25:52.684 nvme10n1: ios=50/9595, merge=0/0, ticks=2976/1197609, in_queue=1200585, util=99.45% 00:25:52.684 nvme1n1: ios=53/9947, merge=0/0, ticks=4935/1219934, in_queue=1224869, util=99.56% 00:25:52.684 nvme2n1: ios=52/6267, merge=0/0, ticks=600/1235590, in_queue=1236190, util=99.72% 00:25:52.684 nvme3n1: ios=49/6507, merge=0/0, ticks=47/1224851, in_queue=1224898, util=98.05% 00:25:52.684 nvme4n1: ios=46/5950, merge=0/0, ticks=49/1230077, in_queue=1230126, util=98.38% 00:25:52.684 nvme5n1: ios=44/7838, merge=0/0, 
ticks=1281/1218458, in_queue=1219739, util=100.00% 00:25:52.684 nvme6n1: ios=51/10932, merge=0/0, ticks=620/1217307, in_queue=1217927, util=100.00% 00:25:52.684 nvme7n1: ios=0/6609, merge=0/0, ticks=0/1242368, in_queue=1242368, util=98.83% 00:25:52.684 nvme8n1: ios=42/9613, merge=0/0, ticks=655/1218519, in_queue=1219174, util=100.00% 00:25:52.684 nvme9n1: ios=48/10471, merge=0/0, ticks=3265/1183698, in_queue=1186963, util=100.00% 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:52.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:52.684 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:52.684 02:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:52.685 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:52.685 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:52.685 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:52.685 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:52.685 02:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.685 02:12:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:52.685 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.685 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:52.944 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.944 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:53.202 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.202 02:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:53.462 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.462 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:53.720 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.720 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:53.978 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:53.978 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.978 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:54.238 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:54.238 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:54.238 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:54.238 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:54.239 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f 
./local-job0-0-verify.state 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:54.239 rmmod nvme_tcp 00:25:54.239 rmmod nvme_fabrics 00:25:54.239 rmmod nvme_keyring 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1644078 ']' 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1644078 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1644078 ']' 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1644078 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1644078 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1644078' 00:25:54.239 killing process with pid 1644078 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1644078 00:25:54.239 02:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1644078 00:25:54.807 02:13:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:54.807 02:13:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:54.807 02:13:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:54.807 02:13:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.807 02:13:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.807 02:13:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.807 02:13:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.807 02:13:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.348 02:13:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:57.348 00:25:57.348 real 1m0.718s 00:25:57.348 user 
3m20.477s 00:25:57.348 sys 0m23.108s 00:25:57.348 02:13:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.348 02:13:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.348 ************************************ 00:25:57.348 END TEST nvmf_multiconnection 00:25:57.348 ************************************ 00:25:57.348 02:13:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:57.348 02:13:02 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:57.348 02:13:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:57.348 02:13:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.348 02:13:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:57.348 ************************************ 00:25:57.348 START TEST nvmf_initiator_timeout 00:25:57.348 ************************************ 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:57.348 * Looking for test storage... 00:25:57.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.348 02:13:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.252 02:13:04 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:59.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:59.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.252 
02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:59.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:59.252 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:59.253 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:25:59.253 00:25:59.253 --- 10.0.0.2 ping statistics --- 00:25:59.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.253 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:25:59.253 00:25:59.253 --- 10.0.0.1 ping statistics --- 00:25:59.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.253 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1653225 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1653225 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1653225 ']' 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.253 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.253 [2024-07-14 02:13:04.719435] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:25:59.253 [2024-07-14 02:13:04.719521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.253 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.253 [2024-07-14 02:13:04.786990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.253 [2024-07-14 02:13:04.874073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.253 [2024-07-14 02:13:04.874137] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.253 [2024-07-14 02:13:04.874152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.253 [2024-07-14 02:13:04.874163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.253 [2024-07-14 02:13:04.874173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.253 [2024-07-14 02:13:04.874234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.253 [2024-07-14 02:13:04.874294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.253 [2024-07-14 02:13:04.874361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:59.253 [2024-07-14 02:13:04.874364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.511 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.511 02:13:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.511 Malloc0 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.511 Delay0 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:59.511 02:13:05 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.511 [2024-07-14 02:13:05.060066] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.511 [2024-07-14 02:13:05.088320] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.511 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:00.448 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:00.448 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:00.448 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:00.448 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:00.448 02:13:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:02.349 02:13:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:02.349 02:13:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:02.349 02:13:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:02.349 02:13:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:02.349 02:13:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:02.349 02:13:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:02.349 02:13:07 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1653652 00:26:02.349 02:13:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:02.349 02:13:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:02.349 [global] 00:26:02.349 thread=1 00:26:02.349 invalidate=1 00:26:02.349 rw=write 00:26:02.349 time_based=1 00:26:02.349 runtime=60 00:26:02.349 ioengine=libaio 00:26:02.349 direct=1 00:26:02.349 bs=4096 00:26:02.349 iodepth=1 00:26:02.349 norandommap=0 00:26:02.349 numjobs=1 00:26:02.349 00:26:02.349 verify_dump=1 00:26:02.349 verify_backlog=512 00:26:02.349 verify_state_save=0 00:26:02.349 do_verify=1 00:26:02.349 verify=crc32c-intel 00:26:02.349 [job0] 00:26:02.349 filename=/dev/nvme0n1 00:26:02.349 Could not set queue depth (nvme0n1) 00:26:02.349 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:02.349 fio-3.35 00:26:02.349 Starting 1 thread 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:05.632 true 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:05.632 true 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:05.632 true 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:05.632 true 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.632 02:13:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.179 true 00:26:08.179 02:13:13 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.179 true 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.179 true 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.179 true 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:08.179 02:13:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1653652 00:27:04.454 00:27:04.454 job0: (groupid=0, jobs=1): err= 0: pid=1653721: Sun Jul 14 02:14:08 2024 00:27:04.454 read: IOPS=41, BW=166KiB/s (170kB/s)(9976KiB/60022msec) 00:27:04.454 slat (nsec): min=5781, max=67274, avg=18127.96, stdev=9804.13 00:27:04.454 clat (usec): min=355, max=41237k, avg=23664.05, stdev=825723.52 00:27:04.454 lat (usec): min=363, max=41237k, avg=23682.18, stdev=825723.57 00:27:04.454 clat percentiles (usec): 00:27:04.454 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 396], 00:27:04.454 | 20.00th=[ 412], 30.00th=[ 429], 40.00th=[ 445], 00:27:04.454 | 50.00th=[ 474], 60.00th=[ 498], 70.00th=[ 510], 00:27:04.454 | 80.00th=[ 545], 90.00th=[ 41157], 95.00th=[ 41157], 00:27:04.454 | 99.00th=[ 41157], 99.50th=[ 42206], 99.90th=[ 44827], 00:27:04.454 | 99.95th=[ 44827], 99.99th=[17112761] 00:27:04.454 write: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60022msec); 0 zone resets 00:27:04.454 slat (usec): min=6, max=29628, avg=30.65, stdev=585.35 00:27:04.454 clat (usec): min=230, max=2692, avg=333.16, stdev=83.34 00:27:04.454 lat (usec): min=237, max=30013, avg=363.81, stdev=593.26 00:27:04.454 clat percentiles (usec): 00:27:04.454 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 255], 00:27:04.454 | 30.00th=[ 273], 40.00th=[ 306], 50.00th=[ 330], 60.00th=[ 355], 00:27:04.454 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 449], 00:27:04.454 | 99.00th=[ 486], 99.50th=[ 490], 99.90th=[ 502], 99.95th=[ 515], 00:27:04.454 | 99.99th=[ 2704] 00:27:04.454 bw ( KiB/s): min= 4792, max= 5432, per=100.00%, avg=5120.00, stdev=263.47, samples=4 00:27:04.454 iops : min= 1198, max= 1358, avg=1280.00, stdev=65.87, samples=4 00:27:04.454 lat (usec) : 250=6.79%, 500=74.44%, 750=10.59%, 1000=0.02% 00:27:04.454 lat (msec) : 2=0.02%, 4=0.02%, 50=8.11%, >=2000=0.02% 00:27:04.454 cpu : usr=0.13%, 
sys=0.18%, ctx=5057, majf=0, minf=2 00:27:04.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:04.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.454 issued rwts: total=2494,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:04.454 00:27:04.454 Run status group 0 (all jobs): 00:27:04.454 READ: bw=166KiB/s (170kB/s), 166KiB/s-166KiB/s (170kB/s-170kB/s), io=9976KiB (10.2MB), run=60022-60022msec 00:27:04.454 WRITE: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60022-60022msec 00:27:04.454 00:27:04.454 Disk stats (read/write): 00:27:04.454 nvme0n1: ios=2543/2560, merge=0/0, ticks=18943/766, in_queue=19709, util=99.92% 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:04.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:04.454 nvmf hotplug test: fio successful as expected 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.454 02:14:08 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.454 rmmod nvme_tcp 00:27:04.454 rmmod nvme_fabrics 00:27:04.454 rmmod nvme_keyring 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1653225 ']' 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1653225 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1653225 ']' 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1653225 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1653225 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1653225' 00:27:04.454 killing process with pid 1653225 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1653225 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1653225 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.454 02:14:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.024 02:14:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.024 00:27:05.024 real 1m8.144s 00:27:05.024 user 4m10.275s 00:27:05.024 sys 0m6.744s 00:27:05.024 02:14:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:05.024 02:14:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.024 ************************************ 00:27:05.024 END TEST nvmf_initiator_timeout 00:27:05.024 ************************************ 00:27:05.024 02:14:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:05.024 02:14:10 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:05.024 02:14:10 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:05.024 02:14:10 nvmf_tcp -- nvmf/nvmf.sh@73 -- # 
gather_supported_nvmf_pci_devs 00:27:05.024 02:14:10 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:05.024 02:14:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:06.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:06.930 02:14:12 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.931 02:14:12 
nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:06.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.931 02:14:12 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:07.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:07.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:07.190 02:14:12 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:07.190 02:14:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:07.190 02:14:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.190 02:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.190 ************************************ 00:27:07.190 START TEST nvmf_perf_adq 00:27:07.190 ************************************ 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
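For orientation, the perf_adq pass that starts here repeats a fixed shape: reload the ice driver, bring the target up inside its test network namespace, apply the socket and transport options, export a Malloc namespace, drive it with spdk_nvme_perf, and finally check how the I/O qpairs landed on the poll groups. A condensed, non-authoritative sketch of the target-side steps, using only commands that appear verbatim in the trace below (rpc_cmd is the test suite's wrapper around scripts/rpc.py; the 10.0.0.2:4420 listener matches the namespace set up by nvmftestinit):

  # Reload the NIC driver so the ADQ-related state starts from a clean slate.
  rmmod ice && modprobe ice && sleep 5

  # Socket/transport configuration applied by adq_configure_nvmf_target in this pass.
  rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0

  # Export a 64 MiB Malloc bdev over NVMe/TCP.
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Load generator: 4 cores (mask 0xF0), queue depth 64, 4 KiB random reads for 10 s.
  spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

  # The pass is judged via nvmf_get_stats: each of the four poll groups
  # should be servicing exactly one I/O qpair.
  count=$(rpc_cmd nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
  [[ $count -eq 4 ]]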
00:27:07.190 * Looking for test storage... 00:27:07.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.190 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:07.191 02:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:09.098 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:09.098 Found 0000:0a:00.1 (0x8086 - 0x159b) 
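The enumeration being traced here is dense but simple in effect; a rough reduction of what gather_supported_nvmf_pci_devs amounts to on this box (pci_bus_cache is the PCI-ID-to-BDF map built earlier by the common helpers, and the 0x8086:0x159b entries are the two E810 ports reported above):

  # Collect the supported NICs (E810 on this system) and resolve each PCI
  # function to the kernel net device behind it.
  pci_devs=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      net_devs+=("${pci_net_devs[@]##*/}")    # yields cvl_0_0 and cvl_0_1 here
  done
  TCP_INTERFACE_LIST=("${net_devs[@]}")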
00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.098 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:09.099 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:09.099 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:09.099 02:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:09.667 02:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:11.573 02:14:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:16.847 02:14:22 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:16.847 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:16.847 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.847 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:16.847 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:16.847 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:16.847 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.847 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.847 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.847 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:16.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:16.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:16.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:16.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.848 02:14:22 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:16.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:27:16.848 00:27:16.848 --- 10.0.0.2 ping statistics --- 00:27:16.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.848 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:27:16.848 00:27:16.848 --- 10.0.0.1 ping statistics --- 00:27:16.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.848 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1665248 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1665248 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1665248 ']' 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:16.848 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.848 [2024-07-14 02:14:22.442872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:27:16.848 [2024-07-14 02:14:22.442949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.848 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.849 [2024-07-14 02:14:22.507060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.107 [2024-07-14 02:14:22.593687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.107 [2024-07-14 02:14:22.593738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.107 [2024-07-14 02:14:22.593766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.107 [2024-07-14 02:14:22.593777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.107 [2024-07-14 02:14:22.593786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.107 [2024-07-14 02:14:22.593872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.107 [2024-07-14 02:14:22.593933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.107 [2024-07-14 02:14:22.593983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.107 [2024-07-14 02:14:22.593986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.107 02:14:22 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.399 [2024-07-14 02:14:22.833754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.399 Malloc1 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.399 [2024-07-14 02:14:22.885762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1665364 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:17.399 02:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:17.399 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.303 02:14:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:19.303 02:14:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.303 02:14:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.303 02:14:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.303 02:14:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:19.303 
"tick_rate": 2700000000, 00:27:19.303 "poll_groups": [ 00:27:19.303 { 00:27:19.303 "name": "nvmf_tgt_poll_group_000", 00:27:19.303 "admin_qpairs": 1, 00:27:19.303 "io_qpairs": 1, 00:27:19.303 "current_admin_qpairs": 1, 00:27:19.304 "current_io_qpairs": 1, 00:27:19.304 "pending_bdev_io": 0, 00:27:19.304 "completed_nvme_io": 19881, 00:27:19.304 "transports": [ 00:27:19.304 { 00:27:19.304 "trtype": "TCP" 00:27:19.304 } 00:27:19.304 ] 00:27:19.304 }, 00:27:19.304 { 00:27:19.304 "name": "nvmf_tgt_poll_group_001", 00:27:19.304 "admin_qpairs": 0, 00:27:19.304 "io_qpairs": 1, 00:27:19.304 "current_admin_qpairs": 0, 00:27:19.304 "current_io_qpairs": 1, 00:27:19.304 "pending_bdev_io": 0, 00:27:19.304 "completed_nvme_io": 12629, 00:27:19.304 "transports": [ 00:27:19.304 { 00:27:19.304 "trtype": "TCP" 00:27:19.304 } 00:27:19.304 ] 00:27:19.304 }, 00:27:19.304 { 00:27:19.304 "name": "nvmf_tgt_poll_group_002", 00:27:19.304 "admin_qpairs": 0, 00:27:19.304 "io_qpairs": 1, 00:27:19.304 "current_admin_qpairs": 0, 00:27:19.304 "current_io_qpairs": 1, 00:27:19.304 "pending_bdev_io": 0, 00:27:19.304 "completed_nvme_io": 19731, 00:27:19.304 "transports": [ 00:27:19.304 { 00:27:19.304 "trtype": "TCP" 00:27:19.304 } 00:27:19.304 ] 00:27:19.304 }, 00:27:19.304 { 00:27:19.304 "name": "nvmf_tgt_poll_group_003", 00:27:19.304 "admin_qpairs": 0, 00:27:19.304 "io_qpairs": 1, 00:27:19.304 "current_admin_qpairs": 0, 00:27:19.304 "current_io_qpairs": 1, 00:27:19.304 "pending_bdev_io": 0, 00:27:19.304 "completed_nvme_io": 19303, 00:27:19.304 "transports": [ 00:27:19.304 { 00:27:19.304 "trtype": "TCP" 00:27:19.304 } 00:27:19.304 ] 00:27:19.304 } 00:27:19.304 ] 00:27:19.304 }' 00:27:19.304 02:14:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:19.304 02:14:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:19.304 02:14:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:19.304 02:14:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:19.304 02:14:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1665364 00:27:29.281 Initializing NVMe Controllers 00:27:29.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:29.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:29.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:29.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:29.281 Initialization complete. Launching workers. 
00:27:29.281 ======================================================== 00:27:29.281 Latency(us) 00:27:29.281 Device Information : IOPS MiB/s Average min max 00:27:29.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10906.30 42.60 5869.02 1752.45 9064.14 00:27:29.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7094.17 27.71 9024.80 4124.62 13157.78 00:27:29.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11079.79 43.28 5777.24 2782.60 8597.76 00:27:29.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11259.99 43.98 5684.27 1562.46 9356.18 00:27:29.281 ======================================================== 00:27:29.281 Total : 40340.25 157.58 6347.21 1562.46 13157.78 00:27:29.281 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.281 rmmod nvme_tcp 00:27:29.281 rmmod nvme_fabrics 00:27:29.281 rmmod nvme_keyring 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1665248 ']' 00:27:29.281 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1665248 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1665248 ']' 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1665248 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1665248 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1665248' 00:27:29.282 killing process with pid 1665248 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1665248 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1665248 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.282 02:14:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.219 02:14:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:30.219 02:14:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:30.219 02:14:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:30.785 02:14:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:32.694 02:14:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.971 02:14:43 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:37.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:37.971 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:37.971 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:37.972 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:37.972 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.972 
02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:37.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:27:37.972 00:27:37.972 --- 10.0.0.2 ping statistics --- 00:27:37.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.972 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:27:37.972 00:27:37.972 --- 10.0.0.1 ping statistics --- 00:27:37.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.972 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:37.972 net.core.busy_poll = 1 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:37.972 net.core.busy_read = 1 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1668001 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1668001 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1668001 ']' 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.972 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.972 [2024-07-14 02:14:43.463815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:37.972 [2024-07-14 02:14:43.463913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.972 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.972 [2024-07-14 02:14:43.534059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.972 [2024-07-14 02:14:43.630284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.972 [2024-07-14 02:14:43.630345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.972 [2024-07-14 02:14:43.630362] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.972 [2024-07-14 02:14:43.630376] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.972 [2024-07-14 02:14:43.630388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:37.972 [2024-07-14 02:14:43.630470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.972 [2024-07-14 02:14:43.630524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.972 [2024-07-14 02:14:43.630646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.972 [2024-07-14 02:14:43.630648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 [2024-07-14 02:14:43.834452] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 Malloc1 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.230 02:14:43 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 [2024-07-14 02:14:43.885034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1668036 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:38.230 02:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:38.230 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.776 02:14:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:40.776 02:14:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.776 02:14:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.776 02:14:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.776 02:14:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:40.776 "tick_rate": 2700000000, 00:27:40.776 "poll_groups": [ 00:27:40.776 { 00:27:40.776 "name": "nvmf_tgt_poll_group_000", 00:27:40.776 "admin_qpairs": 1, 00:27:40.776 "io_qpairs": 1, 00:27:40.776 "current_admin_qpairs": 1, 00:27:40.776 "current_io_qpairs": 1, 00:27:40.776 "pending_bdev_io": 0, 00:27:40.776 "completed_nvme_io": 20737, 00:27:40.776 "transports": [ 00:27:40.776 { 00:27:40.776 "trtype": "TCP" 00:27:40.776 } 00:27:40.776 ] 00:27:40.776 }, 00:27:40.776 { 00:27:40.776 "name": "nvmf_tgt_poll_group_001", 00:27:40.776 "admin_qpairs": 0, 00:27:40.776 "io_qpairs": 3, 00:27:40.777 "current_admin_qpairs": 0, 00:27:40.777 "current_io_qpairs": 3, 00:27:40.777 "pending_bdev_io": 0, 00:27:40.777 "completed_nvme_io": 28364, 00:27:40.777 "transports": [ 00:27:40.777 { 00:27:40.777 "trtype": "TCP" 00:27:40.777 } 00:27:40.777 ] 00:27:40.777 }, 00:27:40.777 { 00:27:40.777 "name": "nvmf_tgt_poll_group_002", 00:27:40.777 "admin_qpairs": 0, 00:27:40.777 "io_qpairs": 0, 00:27:40.777 "current_admin_qpairs": 0, 00:27:40.777 "current_io_qpairs": 0, 00:27:40.777 "pending_bdev_io": 0, 00:27:40.777 "completed_nvme_io": 0, 
00:27:40.777 "transports": [ 00:27:40.777 { 00:27:40.777 "trtype": "TCP" 00:27:40.777 } 00:27:40.777 ] 00:27:40.777 }, 00:27:40.777 { 00:27:40.777 "name": "nvmf_tgt_poll_group_003", 00:27:40.777 "admin_qpairs": 0, 00:27:40.777 "io_qpairs": 0, 00:27:40.777 "current_admin_qpairs": 0, 00:27:40.777 "current_io_qpairs": 0, 00:27:40.777 "pending_bdev_io": 0, 00:27:40.777 "completed_nvme_io": 0, 00:27:40.777 "transports": [ 00:27:40.777 { 00:27:40.777 "trtype": "TCP" 00:27:40.777 } 00:27:40.777 ] 00:27:40.777 } 00:27:40.777 ] 00:27:40.777 }' 00:27:40.777 02:14:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:40.777 02:14:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:40.777 02:14:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:40.777 02:14:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:40.777 02:14:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1668036 00:27:48.927 Initializing NVMe Controllers 00:27:48.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:48.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:48.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:48.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:48.927 Initialization complete. Launching workers. 00:27:48.927 ======================================================== 00:27:48.927 Latency(us) 00:27:48.927 Device Information : IOPS MiB/s Average min max 00:27:48.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10781.90 42.12 5937.19 2643.99 8128.40 00:27:48.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4719.00 18.43 13571.43 3606.49 59774.85 00:27:48.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5198.90 20.31 12317.13 1849.37 60876.15 00:27:48.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5196.40 20.30 12318.03 2006.71 58846.15 00:27:48.927 ======================================================== 00:27:48.928 Total : 25896.19 101.16 9889.59 1849.37 60876.15 00:27:48.928 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.928 rmmod nvme_tcp 00:27:48.928 rmmod nvme_fabrics 00:27:48.928 rmmod nvme_keyring 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1668001 ']' 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1668001 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1668001 ']' 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1668001 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668001 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668001' 00:27:48.928 killing process with pid 1668001 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1668001 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1668001 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.928 02:14:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.218 02:14:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:52.218 02:14:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:52.218 00:27:52.218 real 0m44.784s 00:27:52.218 user 2m29.056s 00:27:52.218 sys 0m13.242s 00:27:52.218 02:14:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.218 02:14:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.218 ************************************ 00:27:52.218 END TEST nvmf_perf_adq 00:27:52.218 ************************************ 00:27:52.218 02:14:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:52.218 02:14:57 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:52.218 02:14:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:52.218 02:14:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.218 02:14:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:52.218 ************************************ 00:27:52.218 START TEST nvmf_shutdown 00:27:52.218 ************************************ 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:52.218 * Looking for test storage... 
00:27:52.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:52.218 ************************************ 00:27:52.218 START TEST nvmf_shutdown_tc1 00:27:52.218 ************************************ 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:52.218 02:14:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.218 02:14:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:54.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:54.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.122 02:14:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.122 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:54.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:54.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:54.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:27:54.123 00:27:54.123 --- 10.0.0.2 ping statistics --- 00:27:54.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.123 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:54.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:27:54.123 00:27:54.123 --- 10.0.0.1 ping statistics --- 00:27:54.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.123 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1671320 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1671320 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1671320 ']' 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.123 02:14:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.123 [2024-07-14 02:14:59.734485] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:27:54.123 [2024-07-14 02:14:59.734557] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.123 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.123 [2024-07-14 02:14:59.798907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.381 [2024-07-14 02:14:59.885289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.381 [2024-07-14 02:14:59.885340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.381 [2024-07-14 02:14:59.885364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.381 [2024-07-14 02:14:59.885375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.381 [2024-07-14 02:14:59.885399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.381 [2024-07-14 02:14:59.885493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.381 [2024-07-14 02:14:59.885557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.381 [2024-07-14 02:14:59.885598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:54.381 [2024-07-14 02:14:59.885600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.381 [2024-07-14 02:15:00.051802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:54.381 02:15:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.381 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.640 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.640 Malloc1 00:27:54.640 [2024-07-14 02:15:00.132941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.640 Malloc2 00:27:54.640 Malloc3 00:27:54.640 Malloc4 00:27:54.640 Malloc5 00:27:54.899 Malloc6 00:27:54.899 Malloc7 00:27:54.899 Malloc8 00:27:54.899 Malloc9 00:27:54.899 Malloc10 00:27:54.899 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.899 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:54.899 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:54.899 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1671536 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1671536 
/var/tmp/bdevperf.sock 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1671536 ']' 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:55.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.159 { 00:27:55.159 "params": { 00:27:55.159 "name": "Nvme$subsystem", 00:27:55.159 "trtype": "$TEST_TRANSPORT", 00:27:55.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.159 "adrfam": "ipv4", 00:27:55.159 "trsvcid": "$NVMF_PORT", 00:27:55.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.159 "hdgst": ${hdgst:-false}, 00:27:55.159 "ddgst": ${ddgst:-false} 00:27:55.159 }, 00:27:55.159 "method": "bdev_nvme_attach_controller" 00:27:55.159 } 00:27:55.159 EOF 00:27:55.159 )") 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.159 { 00:27:55.159 "params": { 00:27:55.159 "name": "Nvme$subsystem", 00:27:55.159 "trtype": "$TEST_TRANSPORT", 00:27:55.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.159 "adrfam": "ipv4", 00:27:55.159 "trsvcid": "$NVMF_PORT", 00:27:55.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.159 "hdgst": ${hdgst:-false}, 00:27:55.159 "ddgst": ${ddgst:-false} 00:27:55.159 }, 00:27:55.159 "method": "bdev_nvme_attach_controller" 00:27:55.159 } 00:27:55.159 EOF 00:27:55.159 )") 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.159 { 00:27:55.159 "params": { 00:27:55.159 
"name": "Nvme$subsystem", 00:27:55.159 "trtype": "$TEST_TRANSPORT", 00:27:55.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.159 "adrfam": "ipv4", 00:27:55.159 "trsvcid": "$NVMF_PORT", 00:27:55.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.159 "hdgst": ${hdgst:-false}, 00:27:55.159 "ddgst": ${ddgst:-false} 00:27:55.159 }, 00:27:55.159 "method": "bdev_nvme_attach_controller" 00:27:55.159 } 00:27:55.159 EOF 00:27:55.159 )") 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.159 { 00:27:55.159 "params": { 00:27:55.159 "name": "Nvme$subsystem", 00:27:55.159 "trtype": "$TEST_TRANSPORT", 00:27:55.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.159 "adrfam": "ipv4", 00:27:55.159 "trsvcid": "$NVMF_PORT", 00:27:55.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.159 "hdgst": ${hdgst:-false}, 00:27:55.159 "ddgst": ${ddgst:-false} 00:27:55.159 }, 00:27:55.159 "method": "bdev_nvme_attach_controller" 00:27:55.159 } 00:27:55.159 EOF 00:27:55.159 )") 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.159 { 00:27:55.159 "params": { 00:27:55.159 "name": "Nvme$subsystem", 00:27:55.159 "trtype": "$TEST_TRANSPORT", 00:27:55.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.159 "adrfam": "ipv4", 00:27:55.159 "trsvcid": "$NVMF_PORT", 00:27:55.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.159 "hdgst": ${hdgst:-false}, 00:27:55.159 "ddgst": ${ddgst:-false} 00:27:55.159 }, 00:27:55.159 "method": "bdev_nvme_attach_controller" 00:27:55.159 } 00:27:55.159 EOF 00:27:55.159 )") 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.159 { 00:27:55.159 "params": { 00:27:55.159 "name": "Nvme$subsystem", 00:27:55.159 "trtype": "$TEST_TRANSPORT", 00:27:55.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.159 "adrfam": "ipv4", 00:27:55.159 "trsvcid": "$NVMF_PORT", 00:27:55.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.159 "hdgst": ${hdgst:-false}, 00:27:55.159 "ddgst": ${ddgst:-false} 00:27:55.159 }, 00:27:55.159 "method": "bdev_nvme_attach_controller" 00:27:55.159 } 00:27:55.159 EOF 00:27:55.159 )") 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.159 { 00:27:55.159 "params": { 00:27:55.159 "name": "Nvme$subsystem", 
00:27:55.159 "trtype": "$TEST_TRANSPORT", 00:27:55.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.159 "adrfam": "ipv4", 00:27:55.159 "trsvcid": "$NVMF_PORT", 00:27:55.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.159 "hdgst": ${hdgst:-false}, 00:27:55.159 "ddgst": ${ddgst:-false} 00:27:55.159 }, 00:27:55.159 "method": "bdev_nvme_attach_controller" 00:27:55.159 } 00:27:55.159 EOF 00:27:55.159 )") 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.159 { 00:27:55.159 "params": { 00:27:55.159 "name": "Nvme$subsystem", 00:27:55.159 "trtype": "$TEST_TRANSPORT", 00:27:55.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.159 "adrfam": "ipv4", 00:27:55.159 "trsvcid": "$NVMF_PORT", 00:27:55.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.159 "hdgst": ${hdgst:-false}, 00:27:55.159 "ddgst": ${ddgst:-false} 00:27:55.159 }, 00:27:55.159 "method": "bdev_nvme_attach_controller" 00:27:55.159 } 00:27:55.159 EOF 00:27:55.159 )") 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.159 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.159 { 00:27:55.159 "params": { 00:27:55.159 "name": "Nvme$subsystem", 00:27:55.159 "trtype": "$TEST_TRANSPORT", 00:27:55.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.159 "adrfam": "ipv4", 00:27:55.159 "trsvcid": "$NVMF_PORT", 00:27:55.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.159 "hdgst": ${hdgst:-false}, 00:27:55.160 "ddgst": ${ddgst:-false} 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 } 00:27:55.160 EOF 00:27:55.160 )") 00:27:55.160 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.160 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.160 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.160 { 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme$subsystem", 00:27:55.160 "trtype": "$TEST_TRANSPORT", 00:27:55.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "$NVMF_PORT", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.160 "hdgst": ${hdgst:-false}, 00:27:55.160 "ddgst": ${ddgst:-false} 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 } 00:27:55.160 EOF 00:27:55.160 )") 00:27:55.160 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.160 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:55.160 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:55.160 02:15:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme1", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.160 "hdgst": false, 00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 },{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme2", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:55.160 "hdgst": false, 00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 },{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme3", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:55.160 "hdgst": false, 00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 },{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme4", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:55.160 "hdgst": false, 00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 },{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme5", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:55.160 "hdgst": false, 00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 },{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme6", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:55.160 "hdgst": false, 00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 },{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme7", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:55.160 "hdgst": false, 00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 },{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme8", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:55.160 "hdgst": false, 
00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 },{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme9", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:55.160 "hdgst": false, 00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 },{ 00:27:55.160 "params": { 00:27:55.160 "name": "Nvme10", 00:27:55.160 "trtype": "tcp", 00:27:55.160 "traddr": "10.0.0.2", 00:27:55.160 "adrfam": "ipv4", 00:27:55.160 "trsvcid": "4420", 00:27:55.160 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:55.160 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:55.160 "hdgst": false, 00:27:55.160 "ddgst": false 00:27:55.160 }, 00:27:55.160 "method": "bdev_nvme_attach_controller" 00:27:55.160 }' 00:27:55.160 [2024-07-14 02:15:00.639219] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:55.160 [2024-07-14 02:15:00.639310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:55.160 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.160 [2024-07-14 02:15:00.704934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.160 [2024-07-14 02:15:00.792477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.060 02:15:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.060 02:15:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:57.060 02:15:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:57.060 02:15:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.060 02:15:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:57.060 02:15:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.060 02:15:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1671536 00:27:57.060 02:15:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:57.060 02:15:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:57.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1671536 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1671320 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:57.993 02:15:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.993 { 00:27:57.993 "params": { 00:27:57.993 "name": "Nvme$subsystem", 00:27:57.993 "trtype": "$TEST_TRANSPORT", 00:27:57.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.993 "adrfam": "ipv4", 00:27:57.993 "trsvcid": "$NVMF_PORT", 00:27:57.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.993 "hdgst": ${hdgst:-false}, 00:27:57.993 "ddgst": ${ddgst:-false} 00:27:57.993 }, 00:27:57.993 "method": "bdev_nvme_attach_controller" 00:27:57.993 } 00:27:57.993 EOF 00:27:57.993 )") 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.993 { 00:27:57.993 "params": { 00:27:57.993 "name": "Nvme$subsystem", 00:27:57.993 "trtype": "$TEST_TRANSPORT", 00:27:57.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.993 "adrfam": "ipv4", 00:27:57.993 "trsvcid": "$NVMF_PORT", 00:27:57.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.993 "hdgst": ${hdgst:-false}, 00:27:57.993 "ddgst": ${ddgst:-false} 00:27:57.993 }, 00:27:57.993 "method": "bdev_nvme_attach_controller" 00:27:57.993 } 00:27:57.993 EOF 00:27:57.993 )") 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.993 { 00:27:57.993 "params": { 00:27:57.993 "name": "Nvme$subsystem", 00:27:57.993 "trtype": "$TEST_TRANSPORT", 00:27:57.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.993 "adrfam": "ipv4", 00:27:57.993 "trsvcid": "$NVMF_PORT", 00:27:57.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.993 "hdgst": ${hdgst:-false}, 00:27:57.993 "ddgst": ${ddgst:-false} 00:27:57.993 }, 00:27:57.993 "method": "bdev_nvme_attach_controller" 00:27:57.993 } 00:27:57.993 EOF 00:27:57.993 )") 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.993 { 00:27:57.993 "params": { 00:27:57.993 "name": "Nvme$subsystem", 00:27:57.993 "trtype": "$TEST_TRANSPORT", 00:27:57.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.993 "adrfam": "ipv4", 00:27:57.993 "trsvcid": "$NVMF_PORT", 00:27:57.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.993 "hdgst": ${hdgst:-false}, 00:27:57.993 "ddgst": ${ddgst:-false} 00:27:57.993 }, 00:27:57.993 "method": "bdev_nvme_attach_controller" 00:27:57.993 } 00:27:57.993 EOF 00:27:57.993 )") 00:27:57.993 02:15:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.993 { 00:27:57.993 "params": { 00:27:57.993 "name": "Nvme$subsystem", 00:27:57.993 "trtype": "$TEST_TRANSPORT", 00:27:57.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.993 "adrfam": "ipv4", 00:27:57.993 "trsvcid": "$NVMF_PORT", 00:27:57.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.993 "hdgst": ${hdgst:-false}, 00:27:57.993 "ddgst": ${ddgst:-false} 00:27:57.993 }, 00:27:57.993 "method": "bdev_nvme_attach_controller" 00:27:57.993 } 00:27:57.993 EOF 00:27:57.993 )") 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.993 { 00:27:57.993 "params": { 00:27:57.993 "name": "Nvme$subsystem", 00:27:57.993 "trtype": "$TEST_TRANSPORT", 00:27:57.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.993 "adrfam": "ipv4", 00:27:57.993 "trsvcid": "$NVMF_PORT", 00:27:57.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.993 "hdgst": ${hdgst:-false}, 00:27:57.993 "ddgst": ${ddgst:-false} 00:27:57.993 }, 00:27:57.993 "method": "bdev_nvme_attach_controller" 00:27:57.993 } 00:27:57.993 EOF 00:27:57.993 )") 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.993 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.993 { 00:27:57.993 "params": { 00:27:57.993 "name": "Nvme$subsystem", 00:27:57.993 "trtype": "$TEST_TRANSPORT", 00:27:57.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.993 "adrfam": "ipv4", 00:27:57.993 "trsvcid": "$NVMF_PORT", 00:27:57.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.993 "hdgst": ${hdgst:-false}, 00:27:57.993 "ddgst": ${ddgst:-false} 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 } 00:27:57.994 EOF 00:27:57.994 )") 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.994 { 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme$subsystem", 00:27:57.994 "trtype": "$TEST_TRANSPORT", 00:27:57.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "$NVMF_PORT", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.994 "hdgst": ${hdgst:-false}, 00:27:57.994 "ddgst": ${ddgst:-false} 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 } 00:27:57.994 EOF 00:27:57.994 )") 00:27:57.994 02:15:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.994 { 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme$subsystem", 00:27:57.994 "trtype": "$TEST_TRANSPORT", 00:27:57.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "$NVMF_PORT", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.994 "hdgst": ${hdgst:-false}, 00:27:57.994 "ddgst": ${ddgst:-false} 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 } 00:27:57.994 EOF 00:27:57.994 )") 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.994 { 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme$subsystem", 00:27:57.994 "trtype": "$TEST_TRANSPORT", 00:27:57.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "$NVMF_PORT", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.994 "hdgst": ${hdgst:-false}, 00:27:57.994 "ddgst": ${ddgst:-false} 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 } 00:27:57.994 EOF 00:27:57.994 )") 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:57.994 02:15:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme1", 00:27:57.994 "trtype": "tcp", 00:27:57.994 "traddr": "10.0.0.2", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "4420", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:57.994 "hdgst": false, 00:27:57.994 "ddgst": false 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 },{ 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme2", 00:27:57.994 "trtype": "tcp", 00:27:57.994 "traddr": "10.0.0.2", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "4420", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:57.994 "hdgst": false, 00:27:57.994 "ddgst": false 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 },{ 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme3", 00:27:57.994 "trtype": "tcp", 00:27:57.994 "traddr": "10.0.0.2", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "4420", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:57.994 "hdgst": false, 00:27:57.994 "ddgst": false 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 },{ 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme4", 00:27:57.994 "trtype": "tcp", 00:27:57.994 "traddr": "10.0.0.2", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "4420", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:57.994 "hdgst": false, 00:27:57.994 "ddgst": false 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 },{ 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme5", 00:27:57.994 "trtype": "tcp", 00:27:57.994 "traddr": "10.0.0.2", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "4420", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:57.994 "hdgst": false, 00:27:57.994 "ddgst": false 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 },{ 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme6", 00:27:57.994 "trtype": "tcp", 00:27:57.994 "traddr": "10.0.0.2", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "4420", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:57.994 "hdgst": false, 00:27:57.994 "ddgst": false 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 },{ 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme7", 00:27:57.994 "trtype": "tcp", 00:27:57.994 "traddr": "10.0.0.2", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "4420", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:57.994 "hdgst": false, 00:27:57.994 "ddgst": false 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 },{ 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme8", 00:27:57.994 "trtype": "tcp", 00:27:57.994 "traddr": "10.0.0.2", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "4420", 00:27:57.994 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:57.994 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:57.994 "hdgst": false, 
00:27:57.994 "ddgst": false 00:27:57.994 }, 00:27:57.994 "method": "bdev_nvme_attach_controller" 00:27:57.994 },{ 00:27:57.994 "params": { 00:27:57.994 "name": "Nvme9", 00:27:57.994 "trtype": "tcp", 00:27:57.994 "traddr": "10.0.0.2", 00:27:57.994 "adrfam": "ipv4", 00:27:57.994 "trsvcid": "4420", 00:27:57.995 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:57.995 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:57.995 "hdgst": false, 00:27:57.995 "ddgst": false 00:27:57.995 }, 00:27:57.995 "method": "bdev_nvme_attach_controller" 00:27:57.995 },{ 00:27:57.995 "params": { 00:27:57.995 "name": "Nvme10", 00:27:57.995 "trtype": "tcp", 00:27:57.995 "traddr": "10.0.0.2", 00:27:57.995 "adrfam": "ipv4", 00:27:57.995 "trsvcid": "4420", 00:27:57.995 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:57.995 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:57.995 "hdgst": false, 00:27:57.995 "ddgst": false 00:27:57.995 }, 00:27:57.995 "method": "bdev_nvme_attach_controller" 00:27:57.995 }' 00:27:57.995 [2024-07-14 02:15:03.658411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:57.995 [2024-07-14 02:15:03.658488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1671914 ] 00:27:58.253 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.253 [2024-07-14 02:15:03.725934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.253 [2024-07-14 02:15:03.815415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.630 Running I/O for 1 seconds... 00:28:01.011 00:28:01.011 Latency(us) 00:28:01.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.011 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 0x0 length 0x400 00:28:01.011 Nvme1n1 : 1.14 224.39 14.02 0.00 0.00 280649.58 20291.89 259425.47 00:28:01.011 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 0x0 length 0x400 00:28:01.011 Nvme2n1 : 1.19 215.53 13.47 0.00 0.00 289547.57 22330.79 267192.70 00:28:01.011 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 0x0 length 0x400 00:28:01.011 Nvme3n1 : 1.17 219.04 13.69 0.00 0.00 280229.93 18835.53 262532.36 00:28:01.011 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 0x0 length 0x400 00:28:01.011 Nvme4n1 : 1.20 267.76 16.74 0.00 0.00 224381.76 8932.31 259425.47 00:28:01.011 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 0x0 length 0x400 00:28:01.011 Nvme5n1 : 1.18 217.20 13.57 0.00 0.00 273499.78 22622.06 268746.15 00:28:01.011 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 0x0 length 0x400 00:28:01.011 Nvme6n1 : 1.20 213.05 13.32 0.00 0.00 273191.44 38836.15 253211.69 00:28:01.011 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 0x0 length 0x400 00:28:01.011 Nvme7n1 : 1.20 213.53 13.35 0.00 0.00 269516.61 23107.51 281173.71 00:28:01.011 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 
0x0 length 0x400 00:28:01.011 Nvme8n1 : 1.21 264.58 16.54 0.00 0.00 214122.04 21651.15 264085.81 00:28:01.011 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 0x0 length 0x400 00:28:01.011 Nvme9n1 : 1.19 215.04 13.44 0.00 0.00 258628.46 21554.06 273406.48 00:28:01.011 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.011 Verification LBA range: start 0x0 length 0x400 00:28:01.011 Nvme10n1 : 1.21 210.99 13.19 0.00 0.00 259952.64 23592.96 299815.06 00:28:01.011 =================================================================================================================== 00:28:01.011 Total : 2261.10 141.32 0.00 0.00 260318.64 8932.31 299815.06 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.269 rmmod nvme_tcp 00:28:01.269 rmmod nvme_fabrics 00:28:01.269 rmmod nvme_keyring 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1671320 ']' 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1671320 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1671320 ']' 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1671320 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1671320 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1671320' 00:28:01.269 killing process with pid 1671320 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1671320 00:28:01.269 02:15:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1671320 00:28:01.834 02:15:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:01.834 02:15:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:01.834 02:15:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:01.834 02:15:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.834 02:15:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.834 02:15:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.834 02:15:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.834 02:15:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:03.733 00:28:03.733 real 0m11.818s 00:28:03.733 user 0m34.168s 00:28:03.733 sys 0m3.272s 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.733 ************************************ 00:28:03.733 END TEST nvmf_shutdown_tc1 00:28:03.733 ************************************ 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:03.733 ************************************ 00:28:03.733 START TEST nvmf_shutdown_tc2 00:28:03.733 ************************************ 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:03.733 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.992 02:15:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:03.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:03.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:03.992 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:03.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:03.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:03.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:28:03.993 00:28:03.993 --- 10.0.0.2 ping statistics --- 00:28:03.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.993 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:03.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:28:03.993 00:28:03.993 --- 10.0.0.1 ping statistics --- 00:28:03.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.993 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1673289 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1673289 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1673289 ']' 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.993 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.993 [2024-07-14 02:15:09.635591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:03.993 [2024-07-14 02:15:09.635664] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.993 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.251 [2024-07-14 02:15:09.705087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.251 [2024-07-14 02:15:09.796358] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.251 [2024-07-14 02:15:09.796421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.251 [2024-07-14 02:15:09.796448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.251 [2024-07-14 02:15:09.796461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.251 [2024-07-14 02:15:09.796473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
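The namespace wiring that nvmf_tcp_init performs in the trace above reduces to the sequence below. Interface names, addresses, the firewall rule, and the nvmf_tgt invocation are taken directly from this log and are specific to this test bed; this is a condensed recap rather than a general recipe, and the repeated "ip netns exec" prefix visible in the traced nvmf_tgt command line is shown only once here.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic from the initiator side
ping -c 1 10.0.0.2                                              # reachability is verified in both directions before the target starts
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E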
00:28:04.251 [2024-07-14 02:15:09.796561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.251 [2024-07-14 02:15:09.796672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.251 [2024-07-14 02:15:09.796740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:04.251 [2024-07-14 02:15:09.796743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.251 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.251 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:04.251 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:04.251 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.251 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.528 [2024-07-14 02:15:09.953768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.528 02:15:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.528 Malloc1 00:28:04.528 [2024-07-14 02:15:10.037678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.528 Malloc2 00:28:04.528 Malloc3 00:28:04.528 Malloc4 00:28:04.796 Malloc5 00:28:04.796 Malloc6 00:28:04.796 Malloc7 00:28:04.796 Malloc8 00:28:04.796 Malloc9 00:28:04.796 Malloc10 00:28:04.796 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.796 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:04.796 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.796 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1673357 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1673357 /var/tmp/bdevperf.sock 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1673357 ']' 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:05.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 00:28:05.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.055 EOF 00:28:05.055 )") 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 00:28:05.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.055 EOF 00:28:05.055 )") 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 00:28:05.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.055 EOF 00:28:05.055 )") 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 
00:28:05.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.055 EOF 00:28:05.055 )") 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 00:28:05.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.055 EOF 00:28:05.055 )") 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 00:28:05.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.055 EOF 00:28:05.055 )") 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 00:28:05.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.055 EOF 00:28:05.055 )") 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 00:28:05.055 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.055 EOF 00:28:05.055 )") 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 00:28:05.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.055 EOF 00:28:05.055 )") 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.055 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.055 { 00:28:05.055 "params": { 00:28:05.055 "name": "Nvme$subsystem", 00:28:05.055 "trtype": "$TEST_TRANSPORT", 00:28:05.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.055 "adrfam": "ipv4", 00:28:05.055 "trsvcid": "$NVMF_PORT", 00:28:05.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.055 "hdgst": ${hdgst:-false}, 00:28:05.055 "ddgst": ${ddgst:-false} 00:28:05.055 }, 00:28:05.055 "method": "bdev_nvme_attach_controller" 00:28:05.055 } 00:28:05.056 EOF 00:28:05.056 )") 00:28:05.056 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.056 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:05.056 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:05.056 02:15:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme1", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.056 "hdgst": false, 00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 },{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme2", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:05.056 "hdgst": false, 00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 },{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme3", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:05.056 "hdgst": false, 00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 },{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme4", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:05.056 "hdgst": false, 00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 },{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme5", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:05.056 "hdgst": false, 00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 },{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme6", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:05.056 "hdgst": false, 00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 },{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme7", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:05.056 "hdgst": false, 00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 },{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme8", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:05.056 "hdgst": false, 
00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 },{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme9", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:05.056 "hdgst": false, 00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 },{ 00:28:05.056 "params": { 00:28:05.056 "name": "Nvme10", 00:28:05.056 "trtype": "tcp", 00:28:05.056 "traddr": "10.0.0.2", 00:28:05.056 "adrfam": "ipv4", 00:28:05.056 "trsvcid": "4420", 00:28:05.056 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:05.056 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:05.056 "hdgst": false, 00:28:05.056 "ddgst": false 00:28:05.056 }, 00:28:05.056 "method": "bdev_nvme_attach_controller" 00:28:05.056 }' 00:28:05.056 [2024-07-14 02:15:10.550184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:05.056 [2024-07-14 02:15:10.550271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673357 ] 00:28:05.056 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.056 [2024-07-14 02:15:10.616396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.056 [2024-07-14 02:15:10.705933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.963 Running I/O for 10 seconds... 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:06.963 02:15:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:06.963 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:07.222 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:07.222 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:07.222 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:07.222 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:07.222 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.222 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.222 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.480 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:07.480 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:07.480 02:15:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:07.736 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.995 
02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1673357 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1673357 ']' 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1673357 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1673357 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1673357' 00:28:07.995 killing process with pid 1673357 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1673357 00:28:07.995 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1673357 00:28:07.995 Received shutdown signal, test time was about 1.230744 seconds 00:28:07.995 00:28:07.995 Latency(us) 00:28:07.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.995 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.995 Verification LBA range: start 0x0 length 0x400 00:28:07.995 Nvme1n1 : 1.23 208.97 13.06 0.00 0.00 303425.99 37865.24 298261.62 00:28:07.995 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.995 Verification LBA range: start 0x0 length 0x400 00:28:07.995 Nvme2n1 : 1.21 263.65 16.48 0.00 0.00 236595.28 19515.16 257872.02 00:28:07.995 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.995 Verification LBA range: start 0x0 length 0x400 00:28:07.995 Nvme3n1 : 1.22 210.14 13.13 0.00 0.00 292216.23 22719.15 267192.70 00:28:07.995 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.995 Verification LBA range: start 0x0 length 0x400 00:28:07.995 Nvme4n1 : 1.22 265.27 16.58 0.00 0.00 227622.68 4951.61 260978.92 00:28:07.995 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.995 Verification LBA range: start 0x0 length 0x400 00:28:07.995 Nvme5n1 : 1.17 164.02 10.25 0.00 0.00 361847.72 54370.61 309135.74 00:28:07.995 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.995 
Verification LBA range: start 0x0 length 0x400 00:28:07.995 Nvme6n1 : 1.18 216.79 13.55 0.00 0.00 269786.26 19612.25 273406.48 00:28:07.995 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.995 Verification LBA range: start 0x0 length 0x400 00:28:07.996 Nvme7n1 : 1.20 213.11 13.32 0.00 0.00 270555.40 21554.06 271853.04 00:28:07.996 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.996 Verification LBA range: start 0x0 length 0x400 00:28:07.996 Nvme8n1 : 1.23 260.17 16.26 0.00 0.00 217905.08 12427.57 267192.70 00:28:07.996 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.996 Verification LBA range: start 0x0 length 0x400 00:28:07.996 Nvme9n1 : 1.21 222.64 13.91 0.00 0.00 249609.27 4975.88 296708.17 00:28:07.996 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:07.996 Verification LBA range: start 0x0 length 0x400 00:28:07.996 Nvme10n1 : 1.19 214.75 13.42 0.00 0.00 254967.28 20680.25 267192.70 00:28:07.996 =================================================================================================================== 00:28:07.996 Total : 2239.52 139.97 0.00 0.00 263171.70 4951.61 309135.74 00:28:08.254 02:15:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1673289 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:09.186 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:09.186 rmmod nvme_tcp 00:28:09.444 rmmod nvme_fabrics 00:28:09.444 rmmod nvme_keyring 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1673289 ']' 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1673289 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1673289 ']' 00:28:09.444 02:15:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1673289 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1673289 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1673289' 00:28:09.444 killing process with pid 1673289 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1673289 00:28:09.444 02:15:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1673289 00:28:10.014 02:15:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:10.014 02:15:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.014 02:15:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:10.014 02:15:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.014 02:15:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.014 02:15:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.014 02:15:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.014 02:15:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.923 00:28:11.923 real 0m8.027s 00:28:11.923 user 0m24.427s 00:28:11.923 sys 0m1.798s 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.923 ************************************ 00:28:11.923 END TEST nvmf_shutdown_tc2 00:28:11.923 ************************************ 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:11.923 ************************************ 00:28:11.923 START TEST nvmf_shutdown_tc3 00:28:11.923 ************************************ 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:11.923 02:15:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:11.923 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:11.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:11.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:11.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:11.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:11.924 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:12.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:28:12.185 00:28:12.185 --- 10.0.0.2 ping statistics --- 00:28:12.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.185 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:28:12.185 00:28:12.185 --- 10.0.0.1 ping statistics --- 00:28:12.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.185 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1674388 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1674388 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1674388 ']' 00:28:12.185 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.186 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:12.186 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.186 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.186 02:15:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.186 [2024-07-14 02:15:17.720390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:28:12.186 [2024-07-14 02:15:17.720475] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.186 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.186 [2024-07-14 02:15:17.790206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.446 [2024-07-14 02:15:17.886845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.446 [2024-07-14 02:15:17.886927] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.446 [2024-07-14 02:15:17.886945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.446 [2024-07-14 02:15:17.886959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.446 [2024-07-14 02:15:17.886970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.446 [2024-07-14 02:15:17.887031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.446 [2024-07-14 02:15:17.887150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.446 [2024-07-14 02:15:17.887218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:12.446 [2024-07-14 02:15:17.887221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.446 [2024-07-14 02:15:18.040758] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:12.446 02:15:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.446 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.447 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.447 Malloc1 00:28:12.447 [2024-07-14 02:15:18.134161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.705 Malloc2 00:28:12.705 Malloc3 00:28:12.705 Malloc4 00:28:12.705 Malloc5 00:28:12.705 Malloc6 00:28:12.965 Malloc7 00:28:12.965 Malloc8 00:28:12.965 Malloc9 00:28:12.965 Malloc10 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1674551 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1674551 
/var/tmp/bdevperf.sock 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1674551 ']' 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:12.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.965 { 00:28:12.965 "params": { 00:28:12.965 "name": "Nvme$subsystem", 00:28:12.965 "trtype": "$TEST_TRANSPORT", 00:28:12.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.965 "adrfam": "ipv4", 00:28:12.965 "trsvcid": "$NVMF_PORT", 00:28:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.965 "hdgst": ${hdgst:-false}, 00:28:12.965 "ddgst": ${ddgst:-false} 00:28:12.965 }, 00:28:12.965 "method": "bdev_nvme_attach_controller" 00:28:12.965 } 00:28:12.965 EOF 00:28:12.965 )") 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.965 { 00:28:12.965 "params": { 00:28:12.965 "name": "Nvme$subsystem", 00:28:12.965 "trtype": "$TEST_TRANSPORT", 00:28:12.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.965 "adrfam": "ipv4", 00:28:12.965 "trsvcid": "$NVMF_PORT", 00:28:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.965 "hdgst": ${hdgst:-false}, 00:28:12.965 "ddgst": ${ddgst:-false} 00:28:12.965 }, 00:28:12.965 "method": "bdev_nvme_attach_controller" 00:28:12.965 } 00:28:12.965 EOF 00:28:12.965 )") 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.965 { 00:28:12.965 "params": { 
00:28:12.965 "name": "Nvme$subsystem", 00:28:12.965 "trtype": "$TEST_TRANSPORT", 00:28:12.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.965 "adrfam": "ipv4", 00:28:12.965 "trsvcid": "$NVMF_PORT", 00:28:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.965 "hdgst": ${hdgst:-false}, 00:28:12.965 "ddgst": ${ddgst:-false} 00:28:12.965 }, 00:28:12.965 "method": "bdev_nvme_attach_controller" 00:28:12.965 } 00:28:12.965 EOF 00:28:12.965 )") 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.965 { 00:28:12.965 "params": { 00:28:12.965 "name": "Nvme$subsystem", 00:28:12.965 "trtype": "$TEST_TRANSPORT", 00:28:12.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.965 "adrfam": "ipv4", 00:28:12.965 "trsvcid": "$NVMF_PORT", 00:28:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.965 "hdgst": ${hdgst:-false}, 00:28:12.965 "ddgst": ${ddgst:-false} 00:28:12.965 }, 00:28:12.965 "method": "bdev_nvme_attach_controller" 00:28:12.965 } 00:28:12.965 EOF 00:28:12.965 )") 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.965 { 00:28:12.965 "params": { 00:28:12.965 "name": "Nvme$subsystem", 00:28:12.965 "trtype": "$TEST_TRANSPORT", 00:28:12.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.965 "adrfam": "ipv4", 00:28:12.965 "trsvcid": "$NVMF_PORT", 00:28:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.965 "hdgst": ${hdgst:-false}, 00:28:12.965 "ddgst": ${ddgst:-false} 00:28:12.965 }, 00:28:12.965 "method": "bdev_nvme_attach_controller" 00:28:12.965 } 00:28:12.965 EOF 00:28:12.965 )") 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.965 { 00:28:12.965 "params": { 00:28:12.965 "name": "Nvme$subsystem", 00:28:12.965 "trtype": "$TEST_TRANSPORT", 00:28:12.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.965 "adrfam": "ipv4", 00:28:12.965 "trsvcid": "$NVMF_PORT", 00:28:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.965 "hdgst": ${hdgst:-false}, 00:28:12.965 "ddgst": ${ddgst:-false} 00:28:12.965 }, 00:28:12.965 "method": "bdev_nvme_attach_controller" 00:28:12.965 } 00:28:12.965 EOF 00:28:12.965 )") 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.965 { 00:28:12.965 "params": { 00:28:12.965 "name": 
"Nvme$subsystem", 00:28:12.965 "trtype": "$TEST_TRANSPORT", 00:28:12.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.965 "adrfam": "ipv4", 00:28:12.965 "trsvcid": "$NVMF_PORT", 00:28:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.965 "hdgst": ${hdgst:-false}, 00:28:12.965 "ddgst": ${ddgst:-false} 00:28:12.965 }, 00:28:12.965 "method": "bdev_nvme_attach_controller" 00:28:12.965 } 00:28:12.965 EOF 00:28:12.965 )") 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.965 { 00:28:12.965 "params": { 00:28:12.965 "name": "Nvme$subsystem", 00:28:12.965 "trtype": "$TEST_TRANSPORT", 00:28:12.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.965 "adrfam": "ipv4", 00:28:12.965 "trsvcid": "$NVMF_PORT", 00:28:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.965 "hdgst": ${hdgst:-false}, 00:28:12.965 "ddgst": ${ddgst:-false} 00:28:12.965 }, 00:28:12.965 "method": "bdev_nvme_attach_controller" 00:28:12.965 } 00:28:12.965 EOF 00:28:12.965 )") 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.965 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.966 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.966 { 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme$subsystem", 00:28:12.966 "trtype": "$TEST_TRANSPORT", 00:28:12.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "$NVMF_PORT", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.966 "hdgst": ${hdgst:-false}, 00:28:12.966 "ddgst": ${ddgst:-false} 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 } 00:28:12.966 EOF 00:28:12.966 )") 00:28:12.966 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.966 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.966 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.966 { 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme$subsystem", 00:28:12.966 "trtype": "$TEST_TRANSPORT", 00:28:12.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "$NVMF_PORT", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.966 "hdgst": ${hdgst:-false}, 00:28:12.966 "ddgst": ${ddgst:-false} 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 } 00:28:12.966 EOF 00:28:12.966 )") 00:28:12.966 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.966 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:12.966 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:12.966 02:15:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme1", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:12.966 "hdgst": false, 00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 },{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme2", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:12.966 "hdgst": false, 00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 },{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme3", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:12.966 "hdgst": false, 00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 },{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme4", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:12.966 "hdgst": false, 00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 },{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme5", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:12.966 "hdgst": false, 00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 },{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme6", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:12.966 "hdgst": false, 00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 },{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme7", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:12.966 "hdgst": false, 00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 },{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme8", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:12.966 "hdgst": false, 
00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 },{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme9", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:12.966 "hdgst": false, 00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 },{ 00:28:12.966 "params": { 00:28:12.966 "name": "Nvme10", 00:28:12.966 "trtype": "tcp", 00:28:12.966 "traddr": "10.0.0.2", 00:28:12.966 "adrfam": "ipv4", 00:28:12.966 "trsvcid": "4420", 00:28:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:12.966 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:12.966 "hdgst": false, 00:28:12.966 "ddgst": false 00:28:12.966 }, 00:28:12.966 "method": "bdev_nvme_attach_controller" 00:28:12.966 }' 00:28:12.966 [2024-07-14 02:15:18.650190] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:12.966 [2024-07-14 02:15:18.650309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1674551 ] 00:28:13.224 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.224 [2024-07-14 02:15:18.715490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.224 [2024-07-14 02:15:18.802945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.129 Running I/O for 10 seconds... 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:15.129 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:15.399 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:15.399 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:15.399 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:15.399 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.399 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.399 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.399 02:15:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.399 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:15.399 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1674388 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1674388 ']' 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1674388 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1674388 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1674388' 00:28:15.400 killing process with pid 1674388 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1674388 00:28:15.400 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1674388 00:28:15.400 
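For readers following the xtrace above: gen_nvmf_target_json builds one heredoc block per subsystem, appends each block to a config array, and joins the entries with IFS=, before bdevperf reads the result through --json /dev/fd/63. The sketch below reproduces only that visible pattern; the two-subsystem loop is illustrative (the test iterates over subsystems 1..10), and anything the trace does not show (for example how the joined list is wrapped before the jq step) is deliberately left out.

#!/usr/bin/env bash
# Illustrative sketch of the per-subsystem heredoc pattern seen in the trace above.
# Not the real gen_nvmf_target_json; it only rebuilds the comma-joined "params" list.
config=()
for i in 1 2; do   # the test itself loops over subsystems 1..10
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # prints the Nvme1 and Nvme2 entries joined by a comma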
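Similarly, the waitforio block traced above polls bdevperf over its RPC socket until Nvme1n1 reports at least 100 completed reads (67 on the first pass here, 131 on the second). A rough stand-alone equivalent of that polling loop, assuming SPDK's scripts/rpc.py is on PATH and bdevperf was started with -r /var/tmp/bdevperf.sock as in the trace, would be:

#!/usr/bin/env bash
# Rough equivalent of shutdown.sh's waitforio loop as traced above.
# Assumptions: SPDK's rpc.py is on PATH and bdevperf listens on /var/tmp/bdevperf.sock;
# the retry count and interval mirror the i=10 / sleep 0.25 values in the trace.
sock=/var/tmp/bdevperf.sock
bdev=Nvme1n1
ret=1
for ((i = 10; i != 0; i--)); do
  # Ask bdevperf for per-bdev I/O statistics and extract the read-op counter.
  read_io_count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
  if [ "$read_io_count" -ge 100 ]; then
    ret=0
    break
  fi
  sleep 0.25
done
exit "$ret"

A non-zero exit after ten polls corresponds to the failed '[ 67 -ge 100 ]' check in the log, and the zero exit to the later '[ 131 -ge 100 ]' success.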
[2024-07-14 02:15:21.029640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8c790 is same with the state(5) to be set
00:28:15.400 [2024-07-14 02:15:21.033566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8f190 is same with the state(5) to be set
00:28:15.401 [2024-07-14 02:15:21.035798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8cc30 is same with the state(5) to be set
00:28:15.402 [2024-07-14 02:15:21.038015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8d0d0 is same with the state(5) to be set
00:28:15.403 [2024-07-14 02:15:21.039958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8d590 is same with the state(5) to be set
00:28:15.403 [... the same nvmf_tcp_qpair_set_recv_state *ERROR* line repeats many times for each of the tqpairs above between 02:15:21.029 and 02:15:21.040; only the first occurrence per tqpair is shown ...]
00:28:15.403 [2024-07-14 02:15:21.040099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8d590 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is 
same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.042924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e390 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.043922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.043955] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.043971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.043984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.043996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.044011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.044024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.044038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.044050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.044066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.044079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.403 [2024-07-14 02:15:21.044093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 
00:28:15.404 [2024-07-14 02:15:21.044266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is 
same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.044784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e830 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.404 [2024-07-14 02:15:21.045726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045899] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.045996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 
00:28:15.405 [2024-07-14 02:15:21.046197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.046395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ecd0 is same with the state(5) to be set 00:28:15.405 [2024-07-14 02:15:21.047195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.405 [2024-07-14 02:15:21.047238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.405 [2024-07-14 02:15:21.047269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.405 [2024-07-14 02:15:21.047285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.405 [2024-07-14 02:15:21.047302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.405 [2024-07-14 02:15:21.047317] nvme_qpair.c: 
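The repeated *ERROR* entries above all come from the same check at tcp.c:1607: nvmf_tcp_qpair_set_recv_state() is being asked to move a qpair into the recv state it already holds, so the transition is redundant and only the message is printed. Below is a minimal sketch of that kind of guard, with hypothetical type and function names; it is illustrative only and is not the SPDK tcp.c implementation (the value 5 is simply the state number printed in the log).

/* sketch: redundant recv-state transition guard (hypothetical names) */
#include <stdio.h>

enum recv_state { RECV_STATE_EXAMPLE = 5 };  /* value 5 taken from the log entries */

struct tqpair {
    enum recv_state recv_state;
};

static void set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
    if (tqpair->recv_state == state) {
        /* Redundant request: report it, like the entries above, and keep the state. */
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tqpair q = { .recv_state = RECV_STATE_EXAMPLE };
    set_recv_state(&q, RECV_STATE_EXAMPLE);  /* triggers the error path once */
    return 0;
}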
00:28:15.405 [2024-07-14 02:15:21.047195 - 02:15:21.048501] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:24 through cid:63 nsid:1 lba:19456 through lba:24448 (in steps of 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:15.406 [2024-07-14 02:15:21.048516 - 02:15:21.049187] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0 through cid:21 nsid:1 lba:16384 through lba:19072 (in steps of 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
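The "(00/08)" shown with every ABORTED - SQ DELETION completion appears to be the NVMe status pair (status code type / status code): SCT 0x0 (generic command status) with SC 0x08, which the NVMe spec defines as "Command Aborted due to SQ Deletion". A small decoder for just that case is sketched below; it is a generic illustration, not SPDK's spdk_nvme_print_completion(), and the function name is hypothetical.

/* sketch: decode the (00/08) status pair printed in the completions above */
#include <stdio.h>

static const char *describe_status(unsigned int sct, unsigned int sc)
{
    /* SCT 0x0 / SC 0x08: Command Aborted due to SQ Deletion */
    if (sct == 0x0 && sc == 0x08) {
        return "ABORTED - SQ DELETION";
    }
    return "other status (not decoded in this sketch)";
}

int main(void)
{
    unsigned int sct = 0x00, sc = 0x08;  /* the "(00/08)" pair from the log */
    printf("(%02x/%02x) -> %s\n", sct, sc, describe_status(sct, sc));
    return 0;
}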
00:28:15.406 [2024-07-14 02:15:21.049203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.049217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.049233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.049247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.049293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.407 [2024-07-14 02:15:21.049375] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x179ceb0 was disconnected and freed. reset controller. 00:28:15.407 [2024-07-14 02:15:21.049774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.049798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.049819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.049835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.049863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.049887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.049909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.049924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.049942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.049956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.049972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.049986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 
02:15:21.050680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.050970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.050986] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.051000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.407 [2024-07-14 02:15:21.051015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.407 [2024-07-14 02:15:21.051029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.408 [2024-07-14 02:15:21.051645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.408 [2024-07-14 02:15:21.051659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.051674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.409 [2024-07-14 02:15:21.051688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.051703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.409 [2024-07-14 02:15:21.051716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.051732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.409 [2024-07-14 02:15:21.051746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.051780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.409 [2024-07-14 02:15:21.051863] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x190bff0 was disconnected and freed. reset controller. 
00:28:15.409 [2024-07-14 02:15:21.052278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4fd0 is same with the state(5) to be set 00:28:15.409 [2024-07-14 02:15:21.052466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196ce10 is same with the state(5) to be set 00:28:15.409 [2024-07-14 02:15:21.052635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddb50 is same with the state(5) to be set 00:28:15.409 [2024-07-14 02:15:21.052819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.052945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.052958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299610 is same with the state(5) to be set 00:28:15.409 [2024-07-14 02:15:21.052998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:15.409 [2024-07-14 02:15:21.053075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19248b0 is same with the state(5) to be set 00:28:15.409 [2024-07-14 02:15:21.053161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa370 is same with the state(5) to be set 00:28:15.409 [2024-07-14 02:15:21.053332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.409 [2024-07-14 02:15:21.053440] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.409 [2024-07-14 02:15:21.053453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1290 is same with the state(5) to be set 00:28:15.410 [2024-07-14 02:15:21.053508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5700 is same with the state(5) to be set 00:28:15.410 [2024-07-14 02:15:21.053673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7910 is same with the state(5) to be set 00:28:15.410 [2024-07-14 02:15:21.053842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 
[2024-07-14 02:15:21.053881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.410 [2024-07-14 02:15:21.053967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.053981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1830 is same with the state(5) to be set 00:28:15.410 [2024-07-14 02:15:21.054251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.054981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.054997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.055011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.055027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.055041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.055056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.055070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.055086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.055100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.055116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.055129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.055145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.410 [2024-07-14 02:15:21.055167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.410 [2024-07-14 02:15:21.055183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.055980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.055995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.056009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.056024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.056042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.056058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.056072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.056087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.056100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.056116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.056130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.056156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.056169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.056184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.056197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.056221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.056235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.056251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.411 [2024-07-14 02:15:21.056265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.411 [2024-07-14 02:15:21.056350] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x179ba00 was disconnected and freed. reset controller. 
00:28:15.411 [2024-07-14 02:15:21.058946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:15.411 [2024-07-14 02:15:21.058982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:15.411 [2024-07-14 02:15:21.059012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1299610 (9): Bad file descriptor 00:28:15.411 [2024-07-14 02:15:21.059035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1830 (9): Bad file descriptor 00:28:15.411 [2024-07-14 02:15:21.060639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:15.411 [2024-07-14 02:15:21.060672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ddb50 (9): Bad file descriptor 00:28:15.411 [2024-07-14 02:15:21.061973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.411 [2024-07-14 02:15:21.062018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c1830 with addr=10.0.0.2, port=4420 00:28:15.411 [2024-07-14 02:15:21.062037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1830 is same with the state(5) to be set 00:28:15.411 [2024-07-14 02:15:21.062195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.411 [2024-07-14 02:15:21.062222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1299610 with addr=10.0.0.2, port=4420 00:28:15.411 [2024-07-14 02:15:21.062247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299610 is same with the state(5) to be set 00:28:15.411 [2024-07-14 02:15:21.062771] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.411 [2024-07-14 02:15:21.062861] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.411 [2024-07-14 02:15:21.062931] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.411 [2024-07-14 02:15:21.063006] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.411 [2024-07-14 02:15:21.063181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.411 [2024-07-14 02:15:21.063209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ddb50 with addr=10.0.0.2, port=4420 00:28:15.412 [2024-07-14 02:15:21.063226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddb50 is same with the state(5) to be set 00:28:15.412 [2024-07-14 02:15:21.063245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1830 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.063265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1299610 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.063286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c4fd0 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.063320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196ce10 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.063353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19248b0 (9): Bad file descriptor 00:28:15.412 
[2024-07-14 02:15:21.063386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa370 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.063416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a1290 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.063447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5700 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.063475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f7910 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.063568] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.412 [2024-07-14 02:15:21.063643] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.412 [2024-07-14 02:15:21.063719] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.412 [2024-07-14 02:15:21.063863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ddb50 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.063899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:15.412 [2024-07-14 02:15:21.063914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:15.412 [2024-07-14 02:15:21.063930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:15.412 [2024-07-14 02:15:21.063951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:15.412 [2024-07-14 02:15:21.063965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:15.412 [2024-07-14 02:15:21.063978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:15.412 [2024-07-14 02:15:21.064071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.412 [2024-07-14 02:15:21.064093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.412 [2024-07-14 02:15:21.064107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:15.412 [2024-07-14 02:15:21.064120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:15.412 [2024-07-14 02:15:21.064140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:15.412 [2024-07-14 02:15:21.064200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:15.412 [2024-07-14 02:15:21.070870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:15.412 [2024-07-14 02:15:21.070929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:15.412 [2024-07-14 02:15:21.071230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.412 [2024-07-14 02:15:21.071264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1299610 with addr=10.0.0.2, port=4420 00:28:15.412 [2024-07-14 02:15:21.071285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299610 is same with the state(5) to be set 00:28:15.412 [2024-07-14 02:15:21.071446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.412 [2024-07-14 02:15:21.071472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c1830 with addr=10.0.0.2, port=4420 00:28:15.412 [2024-07-14 02:15:21.071488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1830 is same with the state(5) to be set 00:28:15.412 [2024-07-14 02:15:21.071550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1299610 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.071576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1830 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.071647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:15.412 [2024-07-14 02:15:21.071668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:15.412 [2024-07-14 02:15:21.071684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:15.412 [2024-07-14 02:15:21.071705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:15.412 [2024-07-14 02:15:21.071719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:15.412 [2024-07-14 02:15:21.071732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:15.412 [2024-07-14 02:15:21.071788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.412 [2024-07-14 02:15:21.071808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:15.412 [2024-07-14 02:15:21.072404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:15.412 [2024-07-14 02:15:21.072597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.412 [2024-07-14 02:15:21.072626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ddb50 with addr=10.0.0.2, port=4420 00:28:15.412 [2024-07-14 02:15:21.072642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddb50 is same with the state(5) to be set 00:28:15.412 [2024-07-14 02:15:21.072700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ddb50 (9): Bad file descriptor 00:28:15.412 [2024-07-14 02:15:21.072758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:15.412 [2024-07-14 02:15:21.072776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:15.412 [2024-07-14 02:15:21.072790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:15.412 [2024-07-14 02:15:21.072845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.412 [2024-07-14 02:15:21.073233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.412 [2024-07-14 02:15:21.073877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.412 [2024-07-14 02:15:21.073894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.073911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.073925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.073941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.073955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.073970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.073984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.413 [2024-07-14 02:15:21.074840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.074970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.074987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.075002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.075018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.075032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.075048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.075062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.075078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.075091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.075107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.413 [2024-07-14 02:15:21.075122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.413 [2024-07-14 02:15:21.075138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 
02:15:21.075152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.075167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.075181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.075197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.075210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.075226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.075239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.075255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.075268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.075285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.075298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.075314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.075328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.075347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.075362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.075376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861a70 is same with the state(5) to be set 00:28:15.414 [2024-07-14 02:15:21.076674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.076706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.076749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.076767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.076783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.076805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.076822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.076848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.076884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.076903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.076921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.076949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.076976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.076992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077165] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.414 [2024-07-14 02:15:21.077889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.414 [2024-07-14 02:15:21.077904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.077920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.077933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.077949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.077968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.077984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.077998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.415 [2024-07-14 02:15:21.078420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.415 [2024-07-14 02:15:21.078435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.415 [2024-07-14 02:15:21.078449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:15.415 [2024-07-14 02:15:21.078465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.415 [2024-07-14 02:15:21.078478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ (sqid:1 cid:54-62 nsid:1 lba:23296-24320 len:128) / ABORTED - SQ DELETION (00/08) pairs elided, 00:28:15.415, 02:15:21.078505-02:15:21.078874 ...] 
00:28:15.415 [2024-07-14 02:15:21.078891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.415 [2024-07-14 02:15:21.078906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:15.415 [2024-07-14 02:15:21.078921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1909d40 is same with the state(5) to be set 
00:28:15.684 [2024-07-14 02:15:21.080199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.684 [2024-07-14 02:15:21.080224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:15.684 [2024-07-14 02:15:21.080247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.684 [2024-07-14 02:15:21.080277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ (sqid:1 cid:2-62 nsid:1 lba:8448-16128 len:128) / ABORTED - SQ DELETION (00/08) pairs elided, 00:28:15.684-685, 02:15:21.080310-02:15:21.082250 ...] 
00:28:15.685 [2024-07-14 02:15:21.082283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.685 [2024-07-14 02:15:21.082306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:15.685 [2024-07-14 02:15:21.082322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b160 is same with the state(5) to be set 
00:28:15.685 [2024-07-14 02:15:21.083569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.685 [2024-07-14 02:15:21.083593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:15.685 [2024-07-14 02:15:21.083614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.685 [2024-07-14 02:15:21.083630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ (sqid:1 cid:10-62 nsid:1 lba:9472-16128 len:128) / ABORTED - SQ DELETION (00/08) pairs elided, 00:28:15.685-687, 02:15:21.083646-02:15:21.085271 ...] 
00:28:15.687 [2024-07-14 02:15:21.085286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.687 [2024-07-14 02:15:21.085300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:15.687 [2024-07-14 02:15:21.085315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190d4a0 is same with the state(5) to be set 
00:28:15.687 [2024-07-14 02:15:21.086510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.687 [2024-07-14 02:15:21.086533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:15.687 [2024-07-14 02:15:21.086555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.687 [2024-07-14 02:15:21.086576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ (sqid:1 cid:2-56 nsid:1 lba:16640-23552 len:128) / ABORTED - SQ DELETION (00/08) pairs elided, 00:28:15.687-688, 02:15:21.086593-02:15:21.088262 ...] 
00:28:15.688 [2024-07-14 02:15:21.088278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.688 [2024-07-14 02:15:21.088292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:15.688 [2024-07-14 
02:15:21.088307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.688 [2024-07-14 02:15:21.088321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.688 [2024-07-14 02:15:21.088336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.688 [2024-07-14 02:15:21.088350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.688 [2024-07-14 02:15:21.088366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.688 [2024-07-14 02:15:21.088380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.688 [2024-07-14 02:15:21.088395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.688 [2024-07-14 02:15:21.088409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.688 [2024-07-14 02:15:21.088425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.688 [2024-07-14 02:15:21.088439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.688 [2024-07-14 02:15:21.088454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.688 [2024-07-14 02:15:21.088468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.688 [2024-07-14 02:15:21.088486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190e950 is same with the state(5) to be set 00:28:15.688 [2024-07-14 02:15:21.089708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.688 [2024-07-14 02:15:21.089730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.688 [2024-07-14 02:15:21.089752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.688 [2024-07-14 02:15:21.089768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.688 [2024-07-14 02:15:21.089786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.688 [2024-07-14 02:15:21.089800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.688 [2024-07-14 02:15:21.089816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.089830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.089846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.089860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.089887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.089902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.089919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.089933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.089949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.089964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.089979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.089993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.090983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.090998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.091013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.091028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.091044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.091058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.091074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.091088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.689 [2024-07-14 02:15:21.091103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.689 [2024-07-14 02:15:21.091117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:15.690 [2024-07-14 02:15:21.091377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.091648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.091662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 
02:15:21.091676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fa60 is same with the state(5) to be set 00:28:15.690 [2024-07-14 02:15:21.093520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.093978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.093992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.094008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.094021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.094037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.094051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.094067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.094081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.094097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.094111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.094126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.094140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.094155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.094169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.094185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.094200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.094216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.094234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.690 [2024-07-14 02:15:21.094250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.690 [2024-07-14 02:15:21.094265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.094974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.094987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.691 [2024-07-14 02:15:21.095471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.691 [2024-07-14 02:15:21.095485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1910e20 is same with the state(5) to be set 00:28:15.691 [2024-07-14 02:15:21.097559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:15.691 [2024-07-14 02:15:21.097594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:15.691 [2024-07-14 02:15:21.097613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:15.691 [2024-07-14 02:15:21.097631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:15.692 [2024-07-14 02:15:21.097757] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:15.692 [2024-07-14 02:15:21.097786] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:15.692 [2024-07-14 02:15:21.097810] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
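The "(00/08)" suffix printed with each aborted completion above is the NVMe status code type / status code pair: type 00h is the generic command status set and code 08h is "Command Aborted due to SQ Deletion", which is consistent with the submission queues being torn down while the controllers below are reset. A minimal sketch of decoding that pair (the mapping is deliberately partial and illustrative, not SPDK's full status table):

```python
# Illustrative decoder for the "(SCT/SC)" pair shown by spdk_nvme_print_completion,
# e.g. "ABORTED - SQ DELETION (00/08)". Partial mapping for illustration only.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # NVMe: Command Aborted due to SQ Deletion
}

def decode_status(sct: int, sc: int) -> str:
    """Turn a status code type (SCT) and status code (SC) into a readable label."""
    if sct == 0x0:  # generic command status
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"

print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION, as in the completions above
```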
00:28:15.692 [2024-07-14 02:15:21.097923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:15.692 [2024-07-14 02:15:21.097950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
task offset: 19456 on job bdev=Nvme5n1 fails
00:28:15.692
00:28:15.692 Latency(us)
00:28:15.692 Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s   Average     min         max
00:28:15.692 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme1n1 ended in about 0.77 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme1n1  : 0.77   165.48   10.34   82.74   0.00   254501.42   21748.24   236123.78
00:28:15.692 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme2n1 ended in about 0.78 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme2n1  : 0.78   164.73   10.30   82.36   0.00   249595.07   36311.80   237677.23
00:28:15.692 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme3n1 ended in about 0.78 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme3n1  : 0.78   82.01    5.13    82.01   0.00   367208.68   43884.85   276513.37
00:28:15.692 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme4n1 ended in about 0.76 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme4n1  : 0.76   253.51   15.84   84.50   0.00   173136.02   12815.93   251658.24
00:28:15.692 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme5n1 ended in about 0.75 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme5n1  : 0.75   169.64   10.60   84.82   0.00   223844.25   9514.86    259425.47
00:28:15.692 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme6n1 ended in about 0.76 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme6n1  : 0.76   169.39   10.59   84.70   0.00   218116.55   8058.50    257872.02
00:28:15.692 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme7n1 ended in about 0.78 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme7n1  : 0.78   91.91    5.74    71.49   0.00   330849.85   39418.69   271853.04
00:28:15.692 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme8n1 ended in about 0.79 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme8n1  : 0.79   162.73   10.17   81.37   0.00   216558.05   21068.61   262532.36
00:28:15.692 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme9n1 ended in about 0.79 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme9n1  : 0.79   81.03    5.06    81.03   0.00   317742.65   28350.39   296708.17
00:28:15.692 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.692 Job: Nvme10n1 ended in about 0.79 seconds with error
00:28:15.692 Verification LBA range: start 0x0 length 0x400
00:28:15.692 Nvme10n1 : 0.79   168.86   10.55   80.65   0.00   200754.20   19515.16   253211.69
00:28:15.692 ===================================================================================================================
00:28:15.692 Total    :        1509.29  94.33   815.66  0.00   243224.05   8058.50    296708.17
00:28:15.692 [2024-07-14 02:15:21.127378] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:15.692 [2024-07-14 02:15:21.127466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:15.692 [2024-07-14 02:15:21.127863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.692 [2024-07-14 02:15:21.127918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a1290 with addr=10.0.0.2, port=4420
00:28:15.692 [2024-07-14 02:15:21.127940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1290 is same with the state(5) to be set
00:28:15.692 [2024-07-14 02:15:21.128101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.692 [2024-07-14 02:15:21.128128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196ce10 with addr=10.0.0.2, port=4420
00:28:15.692 [2024-07-14 02:15:21.128145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196ce10 is same with the state(5) to be set
00:28:15.692 [2024-07-14 02:15:21.128288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.692 [2024-07-14 02:15:21.128315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5700 with addr=10.0.0.2, port=4420
00:28:15.692 [2024-07-14 02:15:21.128332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5700 is same with the state(5) to be set
00:28:15.692 [2024-07-14 02:15:21.128475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.692 [2024-07-14 02:15:21.128501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c4fd0 with addr=10.0.0.2, port=4420
00:28:15.692 [2024-07-14 02:15:21.128517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4fd0 is same with the state(5) to be set
00:28:15.692 [2024-07-14 02:15:21.130409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:15.692 [2024-07-14 02:15:21.130440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:15.692 [2024-07-14 02:15:21.130655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.692 [2024-07-14 02:15:21.130697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f7910 with addr=10.0.0.2, port=4420
00:28:15.692 [2024-07-14 02:15:21.130715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7910 is same with the state(5) to be set
00:28:15.692 [2024-07-14 02:15:21.130863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.692 [2024-07-14 02:15:21.130916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17fa370 with addr=10.0.0.2, port=4420
00:28:15.692 [2024-07-14 02:15:21.130933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa370 is same with the state(5) to be set
00:28:15.692 [2024-07-14 02:15:21.131070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.692 [2024-07-14 02:15:21.131097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19248b0 with addr=10.0.0.2,
port=4420 00:28:15.692 [2024-07-14 02:15:21.131113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19248b0 is same with the state(5) to be set 00:28:15.692 [2024-07-14 02:15:21.131140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a1290 (9): Bad file descriptor 00:28:15.692 [2024-07-14 02:15:21.131173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196ce10 (9): Bad file descriptor 00:28:15.692 [2024-07-14 02:15:21.131191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5700 (9): Bad file descriptor 00:28:15.692 [2024-07-14 02:15:21.131208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c4fd0 (9): Bad file descriptor 00:28:15.692 [2024-07-14 02:15:21.131261] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:15.692 [2024-07-14 02:15:21.131290] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:15.692 [2024-07-14 02:15:21.131311] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:15.692 [2024-07-14 02:15:21.131331] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:15.692 [2024-07-14 02:15:21.131349] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:15.692 [2024-07-14 02:15:21.131434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:15.692 [2024-07-14 02:15:21.131615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.692 [2024-07-14 02:15:21.131644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c1830 with addr=10.0.0.2, port=4420 00:28:15.692 [2024-07-14 02:15:21.131671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1830 is same with the state(5) to be set 00:28:15.692 [2024-07-14 02:15:21.131813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.692 [2024-07-14 02:15:21.131840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1299610 with addr=10.0.0.2, port=4420 00:28:15.692 [2024-07-14 02:15:21.131857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299610 is same with the state(5) to be set 00:28:15.692 [2024-07-14 02:15:21.131884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f7910 (9): Bad file descriptor 00:28:15.692 [2024-07-14 02:15:21.131905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa370 (9): Bad file descriptor 00:28:15.692 [2024-07-14 02:15:21.131923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19248b0 (9): Bad file descriptor 00:28:15.692 [2024-07-14 02:15:21.131939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:15.692 [2024-07-14 02:15:21.131953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:15.692 [2024-07-14 02:15:21.131975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:15.692 [2024-07-14 02:15:21.131996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:15.692 [2024-07-14 02:15:21.132011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:15.693 [2024-07-14 02:15:21.132024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:15.693 [2024-07-14 02:15:21.132040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:15.693 [2024-07-14 02:15:21.132053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:15.693 [2024-07-14 02:15:21.132067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:15.693 [2024-07-14 02:15:21.132083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:15.693 [2024-07-14 02:15:21.132097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:15.693 [2024-07-14 02:15:21.132110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:15.693 [2024-07-14 02:15:21.132216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.693 [2024-07-14 02:15:21.132238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.693 [2024-07-14 02:15:21.132251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.693 [2024-07-14 02:15:21.132263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.693 [2024-07-14 02:15:21.132405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.693 [2024-07-14 02:15:21.132431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ddb50 with addr=10.0.0.2, port=4420 00:28:15.693 [2024-07-14 02:15:21.132448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddb50 is same with the state(5) to be set 00:28:15.693 [2024-07-14 02:15:21.132466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1830 (9): Bad file descriptor 00:28:15.693 [2024-07-14 02:15:21.132485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1299610 (9): Bad file descriptor 00:28:15.693 [2024-07-14 02:15:21.132501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:15.693 [2024-07-14 02:15:21.132514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:15.693 [2024-07-14 02:15:21.132528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:15.693 [2024-07-14 02:15:21.132544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:15.693 [2024-07-14 02:15:21.132558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:15.693 [2024-07-14 02:15:21.132571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:28:15.693 [2024-07-14 02:15:21.132587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:15.693 [2024-07-14 02:15:21.132600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:15.693 [2024-07-14 02:15:21.132614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:15.693 [2024-07-14 02:15:21.132651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.693 [2024-07-14 02:15:21.132669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.693 [2024-07-14 02:15:21.132682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.693 [2024-07-14 02:15:21.132702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ddb50 (9): Bad file descriptor 00:28:15.693 [2024-07-14 02:15:21.132720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:15.693 [2024-07-14 02:15:21.132732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:15.693 [2024-07-14 02:15:21.132745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:15.693 [2024-07-14 02:15:21.132762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:15.693 [2024-07-14 02:15:21.132776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:15.693 [2024-07-14 02:15:21.132789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:15.693 [2024-07-14 02:15:21.132826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.693 [2024-07-14 02:15:21.132844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:15.693 [2024-07-14 02:15:21.132857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:15.693 [2024-07-14 02:15:21.132879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:15.693 [2024-07-14 02:15:21.132894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:15.693 [2024-07-14 02:15:21.132934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:15.953 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:15.953 02:15:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:17.332 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1674551 00:28:17.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1674551) - No such process 00:28:17.332 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:17.332 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.333 rmmod nvme_tcp 00:28:17.333 rmmod nvme_fabrics 00:28:17.333 rmmod nvme_keyring 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.333 02:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.243 02:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:19.243 00:28:19.243 real 0m7.197s 00:28:19.243 user 0m16.679s 00:28:19.243 sys 0m1.435s 00:28:19.243 
02:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:19.243 02:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.243 ************************************ 00:28:19.243 END TEST nvmf_shutdown_tc3 00:28:19.243 ************************************ 00:28:19.243 02:15:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:19.243 02:15:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:19.243 00:28:19.243 real 0m27.240s 00:28:19.243 user 1m15.355s 00:28:19.243 sys 0m6.637s 00:28:19.243 02:15:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:19.243 02:15:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:19.243 ************************************ 00:28:19.243 END TEST nvmf_shutdown 00:28:19.243 ************************************ 00:28:19.243 02:15:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:19.243 02:15:24 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:19.243 02:15:24 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:19.243 02:15:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.243 02:15:24 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:19.243 02:15:24 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:19.243 02:15:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.243 02:15:24 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:19.243 02:15:24 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:19.243 02:15:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:19.243 02:15:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.243 02:15:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.243 ************************************ 00:28:19.243 START TEST nvmf_multicontroller 00:28:19.244 ************************************ 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:19.244 * Looking for test storage... 
00:28:19.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:19.244 02:15:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.244 02:15:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.150 02:15:26 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:21.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:21.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:21.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:21.150 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.150 02:15:26 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.150 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:28:21.408 00:28:21.408 --- 10.0.0.2 ping statistics --- 00:28:21.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.408 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:28:21.408 00:28:21.408 --- 10.0.0.1 ping statistics --- 00:28:21.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.408 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1676960 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1676960 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1676960 ']' 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.408 02:15:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.408 [2024-07-14 02:15:26.936902] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:21.408 [2024-07-14 02:15:26.936973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.408 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.408 [2024-07-14 02:15:27.004662] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:21.408 [2024-07-14 02:15:27.094260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.409 [2024-07-14 02:15:27.094325] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.409 [2024-07-14 02:15:27.094352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.409 [2024-07-14 02:15:27.094365] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.409 [2024-07-14 02:15:27.094377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:21.409 [2024-07-14 02:15:27.094469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.409 [2024-07-14 02:15:27.094592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.409 [2024-07-14 02:15:27.094595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 [2024-07-14 02:15:27.240715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 Malloc0 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 [2024-07-14 02:15:27.299222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 
02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 [2024-07-14 02:15:27.307101] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 Malloc1 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.668 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1676983 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1676983 /var/tmp/bdevperf.sock 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1676983 ']' 00:28:21.928 02:15:27 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:21.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.928 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.187 NVMe0n1 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.187 1 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.187 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.187 request: 00:28:22.187 { 00:28:22.187 "name": "NVMe0", 00:28:22.187 "trtype": "tcp", 00:28:22.187 "traddr": "10.0.0.2", 00:28:22.187 "adrfam": "ipv4", 00:28:22.187 "trsvcid": "4420", 00:28:22.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.187 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:22.187 "hostaddr": "10.0.0.2", 00:28:22.187 "hostsvcid": "60000", 00:28:22.187 "prchk_reftag": false, 00:28:22.187 "prchk_guard": false, 00:28:22.188 "hdgst": false, 00:28:22.188 "ddgst": false, 00:28:22.188 "method": "bdev_nvme_attach_controller", 00:28:22.188 "req_id": 1 00:28:22.188 } 00:28:22.188 Got JSON-RPC error response 00:28:22.188 response: 00:28:22.188 { 00:28:22.188 "code": -114, 00:28:22.188 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:22.188 } 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.188 request: 00:28:22.188 { 00:28:22.188 "name": "NVMe0", 00:28:22.188 "trtype": "tcp", 00:28:22.188 "traddr": "10.0.0.2", 00:28:22.188 "adrfam": "ipv4", 00:28:22.188 "trsvcid": "4420", 00:28:22.188 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:22.188 "hostaddr": "10.0.0.2", 00:28:22.188 "hostsvcid": "60000", 00:28:22.188 "prchk_reftag": false, 00:28:22.188 "prchk_guard": false, 
00:28:22.188 "hdgst": false, 00:28:22.188 "ddgst": false, 00:28:22.188 "method": "bdev_nvme_attach_controller", 00:28:22.188 "req_id": 1 00:28:22.188 } 00:28:22.188 Got JSON-RPC error response 00:28:22.188 response: 00:28:22.188 { 00:28:22.188 "code": -114, 00:28:22.188 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:22.188 } 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.188 request: 00:28:22.188 { 00:28:22.188 "name": "NVMe0", 00:28:22.188 "trtype": "tcp", 00:28:22.188 "traddr": "10.0.0.2", 00:28:22.188 "adrfam": "ipv4", 00:28:22.188 "trsvcid": "4420", 00:28:22.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.188 "hostaddr": "10.0.0.2", 00:28:22.188 "hostsvcid": "60000", 00:28:22.188 "prchk_reftag": false, 00:28:22.188 "prchk_guard": false, 00:28:22.188 "hdgst": false, 00:28:22.188 "ddgst": false, 00:28:22.188 "multipath": "disable", 00:28:22.188 "method": "bdev_nvme_attach_controller", 00:28:22.188 "req_id": 1 00:28:22.188 } 00:28:22.188 Got JSON-RPC error response 00:28:22.188 response: 00:28:22.188 { 00:28:22.188 "code": -114, 00:28:22.188 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:22.188 } 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:22.188 02:15:27 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.188 request: 00:28:22.188 { 00:28:22.188 "name": "NVMe0", 00:28:22.188 "trtype": "tcp", 00:28:22.188 "traddr": "10.0.0.2", 00:28:22.188 "adrfam": "ipv4", 00:28:22.188 "trsvcid": "4420", 00:28:22.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.188 "hostaddr": "10.0.0.2", 00:28:22.188 "hostsvcid": "60000", 00:28:22.188 "prchk_reftag": false, 00:28:22.188 "prchk_guard": false, 00:28:22.188 "hdgst": false, 00:28:22.188 "ddgst": false, 00:28:22.188 "multipath": "failover", 00:28:22.188 "method": "bdev_nvme_attach_controller", 00:28:22.188 "req_id": 1 00:28:22.188 } 00:28:22.188 Got JSON-RPC error response 00:28:22.188 response: 00:28:22.188 { 00:28:22.188 "code": -114, 00:28:22.188 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:22.188 } 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.188 02:15:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.447 00:28:22.447 02:15:28 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.447 02:15:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:22.447 02:15:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.447 02:15:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.447 02:15:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.447 02:15:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:22.447 02:15:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.447 02:15:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.705 00:28:22.705 02:15:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.705 02:15:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:22.705 02:15:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:22.705 02:15:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.705 02:15:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.705 02:15:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.705 02:15:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:22.705 02:15:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:23.641 0 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1676983 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1676983 ']' 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1676983 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.641 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1676983 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1676983' 00:28:23.901 killing process with pid 1676983 00:28:23.901 02:15:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1676983 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1676983 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.901 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:24.158 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:24.158 [2024-07-14 02:15:27.412760] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:24.158 [2024-07-14 02:15:27.412856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1676983 ] 00:28:24.158 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.158 [2024-07-14 02:15:27.473653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.158 [2024-07-14 02:15:27.559515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.158 [2024-07-14 02:15:28.158938] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name e0b7149a-d777-448d-bbcc-1a471ce06d8e already exists 00:28:24.158 [2024-07-14 02:15:28.158979] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:e0b7149a-d777-448d-bbcc-1a471ce06d8e alias for bdev NVMe1n1 00:28:24.158 [2024-07-14 02:15:28.158994] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:24.158 Running I/O for 1 seconds... 
00:28:24.158 00:28:24.158 Latency(us) 00:28:24.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.158 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:24.158 NVMe0n1 : 1.00 18880.00 73.75 0.00 0.00 6762.03 4320.52 15825.73 00:28:24.158 =================================================================================================================== 00:28:24.158 Total : 18880.00 73.75 0.00 0.00 6762.03 4320.52 15825.73 00:28:24.158 Received shutdown signal, test time was about 1.000000 seconds 00:28:24.158 00:28:24.158 Latency(us) 00:28:24.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.158 =================================================================================================================== 00:28:24.158 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.158 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:24.158 rmmod nvme_tcp 00:28:24.158 rmmod nvme_fabrics 00:28:24.158 rmmod nvme_keyring 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1676960 ']' 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1676960 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1676960 ']' 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1676960 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1676960 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1676960' 00:28:24.158 killing process with pid 1676960 00:28:24.158 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1676960 00:28:24.158 02:15:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1676960 00:28:24.415 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:24.415 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:24.415 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:24.415 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:24.415 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:24.415 02:15:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.415 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.415 02:15:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.318 02:15:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:26.318 00:28:26.318 real 0m7.218s 00:28:26.318 user 0m11.418s 00:28:26.318 sys 0m2.229s 00:28:26.318 02:15:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.318 02:15:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.318 ************************************ 00:28:26.318 END TEST nvmf_multicontroller 00:28:26.318 ************************************ 00:28:26.577 02:15:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:26.577 02:15:32 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:26.577 02:15:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:26.577 02:15:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.577 02:15:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:26.577 ************************************ 00:28:26.577 START TEST nvmf_aer 00:28:26.577 ************************************ 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:26.577 * Looking for test storage... 
00:28:26.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:26.577 02:15:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.479 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.479 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:28.479 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:28.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:28.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:28.480 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:28.480 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.480 
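[editor's note] The nvmf_tcp_init steps that follow move the target-side NIC into its own network namespace and assign the 10.0.0.x test addresses used by every host test in this run. Condensed into plain shell, the topology amounts to roughly the following (a sketch assembled from the commands visible in this log; only the interface names cvl_0_0/cvl_0_1, the addresses, and port 4420 are taken from it, everything else is assumed boilerplate):

    # Target NIC lives in its own namespace; initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # cvl_0_0 = target interface
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # cvl_0_1 = initiator, 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target listens on 10.0.0.2
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # sanity-check the path

The raw log entries for these same steps appear below, interleaved with the usual xtrace prefixes.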
02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.480 02:15:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:28.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:28:28.480 00:28:28.480 --- 10.0.0.2 ping statistics --- 00:28:28.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.480 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:28:28.480 00:28:28.480 --- 10.0.0.1 ping statistics --- 00:28:28.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.480 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1679193 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1679193 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1679193 ']' 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.480 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.739 [2024-07-14 02:15:34.179662] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:28.739 [2024-07-14 02:15:34.179756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.739 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.739 [2024-07-14 02:15:34.249593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.739 [2024-07-14 02:15:34.342572] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.739 [2024-07-14 02:15:34.342633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:28.739 [2024-07-14 02:15:34.342657] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.739 [2024-07-14 02:15:34.342670] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.739 [2024-07-14 02:15:34.342682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.739 [2024-07-14 02:15:34.343891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.739 [2024-07-14 02:15:34.343938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.739 [2024-07-14 02:15:34.344022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.739 [2024-07-14 02:15:34.344018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.997 [2024-07-14 02:15:34.487641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.997 Malloc0 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.997 [2024-07-14 02:15:34.539193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.997 [ 00:28:28.997 { 00:28:28.997 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:28.997 "subtype": "Discovery", 00:28:28.997 "listen_addresses": [], 00:28:28.997 "allow_any_host": true, 00:28:28.997 "hosts": [] 00:28:28.997 }, 00:28:28.997 { 00:28:28.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.997 "subtype": "NVMe", 00:28:28.997 "listen_addresses": [ 00:28:28.997 { 00:28:28.997 "trtype": "TCP", 00:28:28.997 "adrfam": "IPv4", 00:28:28.997 "traddr": "10.0.0.2", 00:28:28.997 "trsvcid": "4420" 00:28:28.997 } 00:28:28.997 ], 00:28:28.997 "allow_any_host": true, 00:28:28.997 "hosts": [], 00:28:28.997 "serial_number": "SPDK00000000000001", 00:28:28.997 "model_number": "SPDK bdev Controller", 00:28:28.997 "max_namespaces": 2, 00:28:28.997 "min_cntlid": 1, 00:28:28.997 "max_cntlid": 65519, 00:28:28.997 "namespaces": [ 00:28:28.997 { 00:28:28.997 "nsid": 1, 00:28:28.997 "bdev_name": "Malloc0", 00:28:28.997 "name": "Malloc0", 00:28:28.997 "nguid": "A1CC08E0C0EF494FBC3B73325C65BF62", 00:28:28.997 "uuid": "a1cc08e0-c0ef-494f-bc3b-73325c65bf62" 00:28:28.997 } 00:28:28.997 ] 00:28:28.997 } 00:28:28.997 ] 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1679336 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:28.997 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:28.997 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.257 Malloc1 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.257 [ 00:28:29.257 { 00:28:29.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:29.257 "subtype": "Discovery", 00:28:29.257 "listen_addresses": [], 00:28:29.257 "allow_any_host": true, 00:28:29.257 "hosts": [] 00:28:29.257 }, 00:28:29.257 { 00:28:29.257 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.257 "subtype": "NVMe", 00:28:29.257 "listen_addresses": [ 00:28:29.257 { 00:28:29.257 "trtype": "TCP", 00:28:29.257 "adrfam": "IPv4", 00:28:29.257 "traddr": "10.0.0.2", 00:28:29.257 "trsvcid": "4420" 00:28:29.257 } 00:28:29.257 ], 00:28:29.257 "allow_any_host": true, 00:28:29.257 "hosts": [], 00:28:29.257 "serial_number": "SPDK00000000000001", 00:28:29.257 "model_number": "SPDK bdev Controller", 00:28:29.257 "max_namespaces": 2, 00:28:29.257 "min_cntlid": 1, 00:28:29.257 "max_cntlid": 65519, 00:28:29.257 "namespaces": [ 00:28:29.257 { 00:28:29.257 "nsid": 1, 00:28:29.257 "bdev_name": "Malloc0", 00:28:29.257 "name": "Malloc0", 00:28:29.257 "nguid": "A1CC08E0C0EF494FBC3B73325C65BF62", 00:28:29.257 "uuid": "a1cc08e0-c0ef-494f-bc3b-73325c65bf62" 00:28:29.257 }, 00:28:29.257 { 00:28:29.257 "nsid": 2, 00:28:29.257 "bdev_name": "Malloc1", 00:28:29.257 "name": "Malloc1", 00:28:29.257 "nguid": "4C03F2AD603546CE9D3DEC134FBA1A9A", 00:28:29.257 "uuid": "4c03f2ad-6035-46ce-9d3d-ec134fba1a9a" 00:28:29.257 } 00:28:29.257 ] 00:28:29.257 } 00:28:29.257 ] 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.257 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1679336 00:28:29.518 Asynchronous Event Request test 00:28:29.518 Attaching to 10.0.0.2 00:28:29.518 Attached to 10.0.0.2 00:28:29.518 Registering asynchronous event callbacks... 00:28:29.518 Starting namespace attribute notice tests for all controllers... 
00:28:29.518 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:29.518 aer_cb - Changed Namespace 00:28:29.518 Cleaning up... 00:28:29.518 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:29.518 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.518 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.518 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.518 02:15:34 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:29.518 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.518 02:15:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.518 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.518 02:15:35 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.518 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.518 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.518 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.518 02:15:35 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:29.519 rmmod nvme_tcp 00:28:29.519 rmmod nvme_fabrics 00:28:29.519 rmmod nvme_keyring 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1679193 ']' 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1679193 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1679193 ']' 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1679193 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1679193 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1679193' 00:28:29.519 killing process with pid 1679193 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1679193 00:28:29.519 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1679193 00:28:29.779 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:28:29.779 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:29.779 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:29.779 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:29.779 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:29.779 02:15:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.779 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.779 02:15:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.314 02:15:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:32.314 00:28:32.314 real 0m5.345s 00:28:32.314 user 0m4.601s 00:28:32.314 sys 0m1.819s 00:28:32.314 02:15:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:32.314 02:15:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:32.314 ************************************ 00:28:32.314 END TEST nvmf_aer 00:28:32.315 ************************************ 00:28:32.315 02:15:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:32.315 02:15:37 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:32.315 02:15:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:32.315 02:15:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.315 02:15:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:32.315 ************************************ 00:28:32.315 START TEST nvmf_async_init 00:28:32.315 ************************************ 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:32.315 * Looking for test storage... 
00:28:32.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=44fdde0c01a945d4aebe3483d99ad4d1 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:32.315 02:15:37 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:32.315 02:15:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:33.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:33.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:33.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:33.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.693 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:33.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:28:33.953 00:28:33.953 --- 10.0.0.2 ping statistics --- 00:28:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.953 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:28:33.953 00:28:33.953 --- 10.0.0.1 ping statistics --- 00:28:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.953 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1681268 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1681268 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1681268 ']' 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:33.953 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.953 [2024-07-14 02:15:39.543177] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:28:33.953 [2024-07-14 02:15:39.543285] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.953 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.953 [2024-07-14 02:15:39.613356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.212 [2024-07-14 02:15:39.703033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.212 [2024-07-14 02:15:39.703095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.212 [2024-07-14 02:15:39.703128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.212 [2024-07-14 02:15:39.703143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.212 [2024-07-14 02:15:39.703155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.212 [2024-07-14 02:15:39.703195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.212 [2024-07-14 02:15:39.856455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.212 null0 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.212 02:15:39 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 44fdde0c01a945d4aebe3483d99ad4d1 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.212 [2024-07-14 02:15:39.896690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.212 02:15:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.471 nvme0n1 00:28:34.471 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.471 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:34.471 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.471 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.471 [ 00:28:34.471 { 00:28:34.471 "name": "nvme0n1", 00:28:34.471 "aliases": [ 00:28:34.471 "44fdde0c-01a9-45d4-aebe-3483d99ad4d1" 00:28:34.471 ], 00:28:34.471 "product_name": "NVMe disk", 00:28:34.471 "block_size": 512, 00:28:34.471 "num_blocks": 2097152, 00:28:34.471 "uuid": "44fdde0c-01a9-45d4-aebe-3483d99ad4d1", 00:28:34.471 "assigned_rate_limits": { 00:28:34.471 "rw_ios_per_sec": 0, 00:28:34.471 "rw_mbytes_per_sec": 0, 00:28:34.471 "r_mbytes_per_sec": 0, 00:28:34.471 "w_mbytes_per_sec": 0 00:28:34.471 }, 00:28:34.471 "claimed": false, 00:28:34.471 "zoned": false, 00:28:34.471 "supported_io_types": { 00:28:34.471 "read": true, 00:28:34.471 "write": true, 00:28:34.471 "unmap": false, 00:28:34.471 "flush": true, 00:28:34.471 "reset": true, 00:28:34.471 "nvme_admin": true, 00:28:34.471 "nvme_io": true, 00:28:34.471 "nvme_io_md": false, 00:28:34.471 "write_zeroes": true, 00:28:34.471 "zcopy": false, 00:28:34.471 "get_zone_info": false, 00:28:34.471 "zone_management": false, 00:28:34.471 "zone_append": false, 00:28:34.471 "compare": true, 00:28:34.471 "compare_and_write": true, 00:28:34.471 "abort": true, 00:28:34.471 "seek_hole": false, 00:28:34.471 "seek_data": false, 00:28:34.471 "copy": true, 00:28:34.471 "nvme_iov_md": false 00:28:34.471 }, 00:28:34.471 "memory_domains": [ 00:28:34.471 { 00:28:34.471 "dma_device_id": "system", 00:28:34.471 "dma_device_type": 1 00:28:34.471 } 00:28:34.471 ], 00:28:34.471 "driver_specific": { 00:28:34.471 "nvme": [ 00:28:34.471 { 00:28:34.471 "trid": { 00:28:34.471 "trtype": "TCP", 00:28:34.471 "adrfam": "IPv4", 00:28:34.471 "traddr": "10.0.0.2", 
00:28:34.471 "trsvcid": "4420", 00:28:34.471 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:34.471 }, 00:28:34.471 "ctrlr_data": { 00:28:34.471 "cntlid": 1, 00:28:34.471 "vendor_id": "0x8086", 00:28:34.471 "model_number": "SPDK bdev Controller", 00:28:34.471 "serial_number": "00000000000000000000", 00:28:34.471 "firmware_revision": "24.09", 00:28:34.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:34.471 "oacs": { 00:28:34.471 "security": 0, 00:28:34.471 "format": 0, 00:28:34.471 "firmware": 0, 00:28:34.472 "ns_manage": 0 00:28:34.472 }, 00:28:34.472 "multi_ctrlr": true, 00:28:34.472 "ana_reporting": false 00:28:34.472 }, 00:28:34.472 "vs": { 00:28:34.472 "nvme_version": "1.3" 00:28:34.472 }, 00:28:34.472 "ns_data": { 00:28:34.472 "id": 1, 00:28:34.472 "can_share": true 00:28:34.472 } 00:28:34.472 } 00:28:34.472 ], 00:28:34.472 "mp_policy": "active_passive" 00:28:34.472 } 00:28:34.472 } 00:28:34.472 ] 00:28:34.472 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.472 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:34.472 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.472 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.472 [2024-07-14 02:15:40.150096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.472 [2024-07-14 02:15:40.150205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x791500 (9): Bad file descriptor 00:28:34.734 [2024-07-14 02:15:40.323036] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:34.734 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.734 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.735 [ 00:28:34.735 { 00:28:34.735 "name": "nvme0n1", 00:28:34.735 "aliases": [ 00:28:34.735 "44fdde0c-01a9-45d4-aebe-3483d99ad4d1" 00:28:34.735 ], 00:28:34.735 "product_name": "NVMe disk", 00:28:34.735 "block_size": 512, 00:28:34.735 "num_blocks": 2097152, 00:28:34.735 "uuid": "44fdde0c-01a9-45d4-aebe-3483d99ad4d1", 00:28:34.735 "assigned_rate_limits": { 00:28:34.735 "rw_ios_per_sec": 0, 00:28:34.735 "rw_mbytes_per_sec": 0, 00:28:34.735 "r_mbytes_per_sec": 0, 00:28:34.735 "w_mbytes_per_sec": 0 00:28:34.735 }, 00:28:34.735 "claimed": false, 00:28:34.735 "zoned": false, 00:28:34.735 "supported_io_types": { 00:28:34.735 "read": true, 00:28:34.735 "write": true, 00:28:34.735 "unmap": false, 00:28:34.735 "flush": true, 00:28:34.735 "reset": true, 00:28:34.735 "nvme_admin": true, 00:28:34.735 "nvme_io": true, 00:28:34.735 "nvme_io_md": false, 00:28:34.735 "write_zeroes": true, 00:28:34.735 "zcopy": false, 00:28:34.735 "get_zone_info": false, 00:28:34.735 "zone_management": false, 00:28:34.735 "zone_append": false, 00:28:34.735 "compare": true, 00:28:34.735 "compare_and_write": true, 00:28:34.735 "abort": true, 00:28:34.735 "seek_hole": false, 00:28:34.735 "seek_data": false, 00:28:34.735 "copy": true, 00:28:34.735 "nvme_iov_md": false 00:28:34.735 }, 00:28:34.735 "memory_domains": [ 00:28:34.735 { 00:28:34.735 "dma_device_id": "system", 00:28:34.735 "dma_device_type": 1 
00:28:34.735 } 00:28:34.735 ], 00:28:34.735 "driver_specific": { 00:28:34.735 "nvme": [ 00:28:34.735 { 00:28:34.735 "trid": { 00:28:34.735 "trtype": "TCP", 00:28:34.735 "adrfam": "IPv4", 00:28:34.735 "traddr": "10.0.0.2", 00:28:34.735 "trsvcid": "4420", 00:28:34.735 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:34.735 }, 00:28:34.735 "ctrlr_data": { 00:28:34.735 "cntlid": 2, 00:28:34.735 "vendor_id": "0x8086", 00:28:34.735 "model_number": "SPDK bdev Controller", 00:28:34.735 "serial_number": "00000000000000000000", 00:28:34.735 "firmware_revision": "24.09", 00:28:34.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:34.735 "oacs": { 00:28:34.735 "security": 0, 00:28:34.735 "format": 0, 00:28:34.735 "firmware": 0, 00:28:34.735 "ns_manage": 0 00:28:34.735 }, 00:28:34.735 "multi_ctrlr": true, 00:28:34.735 "ana_reporting": false 00:28:34.735 }, 00:28:34.735 "vs": { 00:28:34.735 "nvme_version": "1.3" 00:28:34.735 }, 00:28:34.735 "ns_data": { 00:28:34.735 "id": 1, 00:28:34.735 "can_share": true 00:28:34.735 } 00:28:34.735 } 00:28:34.735 ], 00:28:34.735 "mp_policy": "active_passive" 00:28:34.735 } 00:28:34.735 } 00:28:34.735 ] 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.EMvs307dup 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.EMvs307dup 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.735 [2024-07-14 02:15:40.378873] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:34.735 [2024-07-14 02:15:40.379077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EMvs307dup 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.735 [2024-07-14 02:15:40.386889] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EMvs307dup 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.735 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.735 [2024-07-14 02:15:40.394927] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:34.735 [2024-07-14 02:15:40.394998] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:35.020 nvme0n1 00:28:35.020 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.020 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:35.020 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.020 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.020 [ 00:28:35.020 { 00:28:35.020 "name": "nvme0n1", 00:28:35.020 "aliases": [ 00:28:35.020 "44fdde0c-01a9-45d4-aebe-3483d99ad4d1" 00:28:35.020 ], 00:28:35.020 "product_name": "NVMe disk", 00:28:35.020 "block_size": 512, 00:28:35.020 "num_blocks": 2097152, 00:28:35.020 "uuid": "44fdde0c-01a9-45d4-aebe-3483d99ad4d1", 00:28:35.020 "assigned_rate_limits": { 00:28:35.020 "rw_ios_per_sec": 0, 00:28:35.020 "rw_mbytes_per_sec": 0, 00:28:35.020 "r_mbytes_per_sec": 0, 00:28:35.020 "w_mbytes_per_sec": 0 00:28:35.020 }, 00:28:35.020 "claimed": false, 00:28:35.020 "zoned": false, 00:28:35.020 "supported_io_types": { 00:28:35.020 "read": true, 00:28:35.020 "write": true, 00:28:35.020 "unmap": false, 00:28:35.020 "flush": true, 00:28:35.020 "reset": true, 00:28:35.020 "nvme_admin": true, 00:28:35.020 "nvme_io": true, 00:28:35.020 "nvme_io_md": false, 00:28:35.020 "write_zeroes": true, 00:28:35.020 "zcopy": false, 00:28:35.020 "get_zone_info": false, 00:28:35.020 "zone_management": false, 00:28:35.020 "zone_append": false, 00:28:35.020 "compare": true, 00:28:35.020 "compare_and_write": true, 00:28:35.020 "abort": true, 00:28:35.020 "seek_hole": false, 00:28:35.020 "seek_data": false, 00:28:35.020 "copy": true, 00:28:35.020 "nvme_iov_md": false 00:28:35.020 }, 00:28:35.020 "memory_domains": [ 00:28:35.020 { 00:28:35.020 "dma_device_id": "system", 00:28:35.020 "dma_device_type": 1 00:28:35.020 } 00:28:35.020 ], 00:28:35.020 "driver_specific": { 00:28:35.020 "nvme": [ 00:28:35.020 { 00:28:35.020 "trid": { 00:28:35.020 "trtype": "TCP", 00:28:35.020 "adrfam": "IPv4", 00:28:35.020 "traddr": "10.0.0.2", 00:28:35.020 "trsvcid": "4421", 00:28:35.020 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:35.020 }, 00:28:35.020 "ctrlr_data": { 00:28:35.020 "cntlid": 3, 00:28:35.020 "vendor_id": "0x8086", 00:28:35.020 "model_number": "SPDK bdev Controller", 00:28:35.020 "serial_number": "00000000000000000000", 00:28:35.020 "firmware_revision": "24.09", 00:28:35.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:28:35.020 "oacs": { 00:28:35.020 "security": 0, 00:28:35.020 "format": 0, 00:28:35.020 "firmware": 0, 00:28:35.020 "ns_manage": 0 00:28:35.020 }, 00:28:35.020 "multi_ctrlr": true, 00:28:35.020 "ana_reporting": false 00:28:35.020 }, 00:28:35.020 "vs": { 00:28:35.020 "nvme_version": "1.3" 00:28:35.021 }, 00:28:35.021 "ns_data": { 00:28:35.021 "id": 1, 00:28:35.021 "can_share": true 00:28:35.021 } 00:28:35.021 } 00:28:35.021 ], 00:28:35.021 "mp_policy": "active_passive" 00:28:35.021 } 00:28:35.021 } 00:28:35.021 ] 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.EMvs307dup 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:35.021 rmmod nvme_tcp 00:28:35.021 rmmod nvme_fabrics 00:28:35.021 rmmod nvme_keyring 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1681268 ']' 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1681268 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1681268 ']' 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1681268 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1681268 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1681268' 00:28:35.021 killing process with pid 1681268 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1681268 00:28:35.021 [2024-07-14 02:15:40.580085] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:28:35.021 [2024-07-14 02:15:40.580133] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:35.021 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1681268 00:28:35.283 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:35.283 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:35.283 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:35.283 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:35.283 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:35.283 02:15:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.283 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.283 02:15:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.192 02:15:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:37.192 00:28:37.192 real 0m5.378s 00:28:37.192 user 0m1.995s 00:28:37.192 sys 0m1.756s 00:28:37.192 02:15:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:37.192 02:15:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:37.192 ************************************ 00:28:37.192 END TEST nvmf_async_init 00:28:37.192 ************************************ 00:28:37.192 02:15:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:37.192 02:15:42 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:37.192 02:15:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:37.192 02:15:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.192 02:15:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:37.192 ************************************ 00:28:37.192 START TEST dma 00:28:37.192 ************************************ 00:28:37.192 02:15:42 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:37.452 * Looking for test storage... 
00:28:37.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.452 02:15:42 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.452 02:15:42 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.452 02:15:42 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.452 02:15:42 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.452 02:15:42 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.452 02:15:42 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.452 02:15:42 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.452 02:15:42 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:37.452 02:15:42 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:37.452 02:15:42 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:37.452 02:15:42 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:37.452 02:15:42 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:37.452 00:28:37.452 real 0m0.074s 00:28:37.452 user 0m0.035s 00:28:37.452 sys 0m0.045s 00:28:37.452 02:15:42 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:37.452 02:15:42 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:37.452 ************************************ 00:28:37.452 END TEST dma 00:28:37.452 ************************************ 00:28:37.452 02:15:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:37.452 02:15:42 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:37.452 02:15:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:37.452 02:15:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.452 02:15:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:37.452 ************************************ 00:28:37.452 START TEST nvmf_identify 00:28:37.452 ************************************ 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:37.452 * Looking for test storage... 
00:28:37.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.452 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:37.453 02:15:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.358 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:39.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:39.617 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:39.617 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:39.617 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:28:39.617 00:28:39.617 --- 10.0.0.2 ping statistics --- 00:28:39.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.617 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:28:39.617 00:28:39.617 --- 10.0.0.1 ping statistics --- 00:28:39.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.617 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.617 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1683394 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1683394 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1683394 ']' 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.618 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:39.618 [2024-07-14 02:15:45.263793] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:39.618 [2024-07-14 02:15:45.263877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.618 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.877 [2024-07-14 02:15:45.335555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.877 [2024-07-14 02:15:45.427405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
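Note: the interface plumbing that nvmf_tcp_init just performed (and the target launch that follows) condenses to the sketch below. Every command is taken from the trace above; cvl_0_0/cvl_0_1 are the ice ports found on this host and will differ elsewhere, and the nvmf_tgt path is written relative to the SPDK checkout rather than the full Jenkins workspace path.

  # flush any stale addressing, then move one port into a private namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  # verify reachability both ways, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # -e: tracepoint group mask, -m: core mask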
00:28:39.877 [2024-07-14 02:15:45.427469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.877 [2024-07-14 02:15:45.427486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.877 [2024-07-14 02:15:45.427500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.877 [2024-07-14 02:15:45.427511] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.877 [2024-07-14 02:15:45.427591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.877 [2024-07-14 02:15:45.427639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.877 [2024-07-14 02:15:45.427730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.877 [2024-07-14 02:15:45.427733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.877 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.877 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:39.877 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:39.877 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.877 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:39.877 [2024-07-14 02:15:45.561711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.138 Malloc0 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:40.138 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.139 [2024-07-14 02:15:45.643312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.139 [ 00:28:40.139 { 00:28:40.139 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:40.139 "subtype": "Discovery", 00:28:40.139 "listen_addresses": [ 00:28:40.139 { 00:28:40.139 "trtype": "TCP", 00:28:40.139 "adrfam": "IPv4", 00:28:40.139 "traddr": "10.0.0.2", 00:28:40.139 "trsvcid": "4420" 00:28:40.139 } 00:28:40.139 ], 00:28:40.139 "allow_any_host": true, 00:28:40.139 "hosts": [] 00:28:40.139 }, 00:28:40.139 { 00:28:40.139 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.139 "subtype": "NVMe", 00:28:40.139 "listen_addresses": [ 00:28:40.139 { 00:28:40.139 "trtype": "TCP", 00:28:40.139 "adrfam": "IPv4", 00:28:40.139 "traddr": "10.0.0.2", 00:28:40.139 "trsvcid": "4420" 00:28:40.139 } 00:28:40.139 ], 00:28:40.139 "allow_any_host": true, 00:28:40.139 "hosts": [], 00:28:40.139 "serial_number": "SPDK00000000000001", 00:28:40.139 "model_number": "SPDK bdev Controller", 00:28:40.139 "max_namespaces": 32, 00:28:40.139 "min_cntlid": 1, 00:28:40.139 "max_cntlid": 65519, 00:28:40.139 "namespaces": [ 00:28:40.139 { 00:28:40.139 "nsid": 1, 00:28:40.139 "bdev_name": "Malloc0", 00:28:40.139 "name": "Malloc0", 00:28:40.139 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:40.139 "eui64": "ABCDEF0123456789", 00:28:40.139 "uuid": "3c46b965-058c-45d0-9d88-33d0acc11e4c" 00:28:40.139 } 00:28:40.139 ] 00:28:40.139 } 00:28:40.139 ] 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.139 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:40.139 [2024-07-14 02:15:45.685637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
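The subsystem layout that both identify passes probe was built with the rpc_cmd sequence echoed above from host/identify.sh. Condensed, with rpc_cmd standing for the test helper that forwards each call to the running nvmf_tgt's RPC socket, it is roughly:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192           # TCP transport with the flags shown in the log
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0               # 64 MB RAM-backed bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
          --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_get_subsystems                                 # returns the JSON dump shown above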
00:28:40.139 [2024-07-14 02:15:45.685682] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683416 ] 00:28:40.139 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.139 [2024-07-14 02:15:45.721305] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:40.139 [2024-07-14 02:15:45.721379] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:40.139 [2024-07-14 02:15:45.721389] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:40.139 [2024-07-14 02:15:45.721403] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:40.139 [2024-07-14 02:15:45.721413] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:40.139 [2024-07-14 02:15:45.724944] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:40.139 [2024-07-14 02:15:45.725011] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22f3fe0 0 00:28:40.139 [2024-07-14 02:15:45.732878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:40.139 [2024-07-14 02:15:45.732902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:40.139 [2024-07-14 02:15:45.732916] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:40.139 [2024-07-14 02:15:45.732923] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:40.139 [2024-07-14 02:15:45.732981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.732995] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.733004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.139 [2024-07-14 02:15:45.733024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:40.139 [2024-07-14 02:15:45.733051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.139 [2024-07-14 02:15:45.740882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.139 [2024-07-14 02:15:45.740901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.139 [2024-07-14 02:15:45.740908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.740916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.139 [2024-07-14 02:15:45.740938] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:40.139 [2024-07-14 02:15:45.740951] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:40.139 [2024-07-14 02:15:45.740960] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:40.139 [2024-07-14 02:15:45.740984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.740993] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.740999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.139 [2024-07-14 02:15:45.741010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.139 [2024-07-14 02:15:45.741034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.139 [2024-07-14 02:15:45.741295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.139 [2024-07-14 02:15:45.741311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.139 [2024-07-14 02:15:45.741318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.741325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.139 [2024-07-14 02:15:45.741334] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:40.139 [2024-07-14 02:15:45.741362] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:40.139 [2024-07-14 02:15:45.741374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.741381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.741387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.139 [2024-07-14 02:15:45.741397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.139 [2024-07-14 02:15:45.741418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.139 [2024-07-14 02:15:45.741625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.139 [2024-07-14 02:15:45.741637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.139 [2024-07-14 02:15:45.741643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.741650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.139 [2024-07-14 02:15:45.741659] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:40.139 [2024-07-14 02:15:45.741678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:40.139 [2024-07-14 02:15:45.741691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.741698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.741704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.139 [2024-07-14 02:15:45.741714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.139 [2024-07-14 02:15:45.741749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.139 [2024-07-14 02:15:45.742045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.139 
[2024-07-14 02:15:45.742062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.139 [2024-07-14 02:15:45.742069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.742075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.139 [2024-07-14 02:15:45.742086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:40.139 [2024-07-14 02:15:45.742103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.742112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.742119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.139 [2024-07-14 02:15:45.742129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.139 [2024-07-14 02:15:45.742151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.139 [2024-07-14 02:15:45.742337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.139 [2024-07-14 02:15:45.742353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.139 [2024-07-14 02:15:45.742359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.742366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.139 [2024-07-14 02:15:45.742375] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:40.139 [2024-07-14 02:15:45.742384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:40.139 [2024-07-14 02:15:45.742397] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:40.139 [2024-07-14 02:15:45.742508] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:40.139 [2024-07-14 02:15:45.742517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:40.139 [2024-07-14 02:15:45.742532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.742539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.139 [2024-07-14 02:15:45.742545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.139 [2024-07-14 02:15:45.742555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.139 [2024-07-14 02:15:45.742576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.139 [2024-07-14 02:15:45.742782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.140 [2024-07-14 02:15:45.742797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.140 [2024-07-14 02:15:45.742803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.742814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.140 [2024-07-14 02:15:45.742823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:40.140 [2024-07-14 02:15:45.742840] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.742848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.742854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.742873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.140 [2024-07-14 02:15:45.742916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.140 [2024-07-14 02:15:45.743110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.140 [2024-07-14 02:15:45.743125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.140 [2024-07-14 02:15:45.743132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.743139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.140 [2024-07-14 02:15:45.743147] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:40.140 [2024-07-14 02:15:45.743155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:40.140 [2024-07-14 02:15:45.743170] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:40.140 [2024-07-14 02:15:45.743191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:40.140 [2024-07-14 02:15:45.743209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.743216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.743241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.140 [2024-07-14 02:15:45.743263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.140 [2024-07-14 02:15:45.743526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.140 [2024-07-14 02:15:45.743542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.140 [2024-07-14 02:15:45.743549] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.743556] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f3fe0): datao=0, datal=4096, cccid=0 00:28:40.140 [2024-07-14 02:15:45.743564] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x235a880) on tqpair(0x22f3fe0): expected_datao=0, payload_size=4096 00:28:40.140 [2024-07-14 02:15:45.743572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.743603] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.743613] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.784068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.140 [2024-07-14 02:15:45.784086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.140 [2024-07-14 02:15:45.784094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.784101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.140 [2024-07-14 02:15:45.784115] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:40.140 [2024-07-14 02:15:45.784130] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:40.140 [2024-07-14 02:15:45.784142] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:40.140 [2024-07-14 02:15:45.784152] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:40.140 [2024-07-14 02:15:45.784176] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:40.140 [2024-07-14 02:15:45.784184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:40.140 [2024-07-14 02:15:45.784200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:40.140 [2024-07-14 02:15:45.784214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.784221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.784228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.784239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:40.140 [2024-07-14 02:15:45.784261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.140 [2024-07-14 02:15:45.787891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.140 [2024-07-14 02:15:45.787908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.140 [2024-07-14 02:15:45.787915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.787922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.140 [2024-07-14 02:15:45.787936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.787944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.787950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.787960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.140 [2024-07-14 02:15:45.787971] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.787977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.787984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.787992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.140 [2024-07-14 02:15:45.788001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.788022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.140 [2024-07-14 02:15:45.788032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.788053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.140 [2024-07-14 02:15:45.788062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:40.140 [2024-07-14 02:15:45.788082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:40.140 [2024-07-14 02:15:45.788098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.788116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.140 [2024-07-14 02:15:45.788139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a880, cid 0, qid 0 00:28:40.140 [2024-07-14 02:15:45.788166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235aa00, cid 1, qid 0 00:28:40.140 [2024-07-14 02:15:45.788174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235ab80, cid 2, qid 0 00:28:40.140 [2024-07-14 02:15:45.788182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235ad00, cid 3, qid 0 00:28:40.140 [2024-07-14 02:15:45.788190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235ae80, cid 4, qid 0 00:28:40.140 [2024-07-14 02:15:45.788409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.140 [2024-07-14 02:15:45.788424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.140 [2024-07-14 02:15:45.788431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ae80) on tqpair=0x22f3fe0 00:28:40.140 [2024-07-14 02:15:45.788448] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:40.140 [2024-07-14 02:15:45.788472] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:40.140 [2024-07-14 02:15:45.788489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.788508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.140 [2024-07-14 02:15:45.788528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235ae80, cid 4, qid 0 00:28:40.140 [2024-07-14 02:15:45.788720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.140 [2024-07-14 02:15:45.788732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.140 [2024-07-14 02:15:45.788738] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788745] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f3fe0): datao=0, datal=4096, cccid=4 00:28:40.140 [2024-07-14 02:15:45.788752] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x235ae80) on tqpair(0x22f3fe0): expected_datao=0, payload_size=4096 00:28:40.140 [2024-07-14 02:15:45.788759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788794] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788819] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.140 [2024-07-14 02:15:45.788946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.140 [2024-07-14 02:15:45.788953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.788960] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ae80) on tqpair=0x22f3fe0 00:28:40.140 [2024-07-14 02:15:45.788979] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:40.140 [2024-07-14 02:15:45.789019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.789030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f3fe0) 00:28:40.140 [2024-07-14 02:15:45.789040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.140 [2024-07-14 02:15:45.789058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.789067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.140 [2024-07-14 02:15:45.789073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22f3fe0) 00:28:40.141 [2024-07-14 02:15:45.789082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.141 [2024-07-14 02:15:45.789110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x235ae80, cid 4, qid 0 00:28:40.141 [2024-07-14 02:15:45.789122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235b000, cid 5, qid 0 00:28:40.141 [2024-07-14 02:15:45.789355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.141 [2024-07-14 02:15:45.789368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.141 [2024-07-14 02:15:45.789375] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.141 [2024-07-14 02:15:45.789381] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f3fe0): datao=0, datal=1024, cccid=4 00:28:40.141 [2024-07-14 02:15:45.789389] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x235ae80) on tqpair(0x22f3fe0): expected_datao=0, payload_size=1024 00:28:40.141 [2024-07-14 02:15:45.789396] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.141 [2024-07-14 02:15:45.789405] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.141 [2024-07-14 02:15:45.789412] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.141 [2024-07-14 02:15:45.789435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.141 [2024-07-14 02:15:45.789444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.141 [2024-07-14 02:15:45.789450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.141 [2024-07-14 02:15:45.789456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235b000) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.831031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.401 [2024-07-14 02:15:45.831052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.401 [2024-07-14 02:15:45.831060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.831067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ae80) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.831087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.831097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f3fe0) 00:28:40.401 [2024-07-14 02:15:45.831109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.401 [2024-07-14 02:15:45.831148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235ae80, cid 4, qid 0 00:28:40.401 [2024-07-14 02:15:45.831335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.401 [2024-07-14 02:15:45.831351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.401 [2024-07-14 02:15:45.831357] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.831364] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f3fe0): datao=0, datal=3072, cccid=4 00:28:40.401 [2024-07-14 02:15:45.831372] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x235ae80) on tqpair(0x22f3fe0): expected_datao=0, payload_size=3072 00:28:40.401 [2024-07-14 02:15:45.831379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.831415] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.831424] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.875888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.401 [2024-07-14 02:15:45.875908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.401 [2024-07-14 02:15:45.875916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.875923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ae80) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.875946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.875956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f3fe0) 00:28:40.401 [2024-07-14 02:15:45.875967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.401 [2024-07-14 02:15:45.875998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235ae80, cid 4, qid 0 00:28:40.401 [2024-07-14 02:15:45.876183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.401 [2024-07-14 02:15:45.876196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.401 [2024-07-14 02:15:45.876202] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.876209] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f3fe0): datao=0, datal=8, cccid=4 00:28:40.401 [2024-07-14 02:15:45.876217] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x235ae80) on tqpair(0x22f3fe0): expected_datao=0, payload_size=8 00:28:40.401 [2024-07-14 02:15:45.876224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.876234] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.876241] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.918025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.401 [2024-07-14 02:15:45.918045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.401 [2024-07-14 02:15:45.918052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.918060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ae80) on tqpair=0x22f3fe0 00:28:40.401 ===================================================== 00:28:40.401 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:40.401 ===================================================== 00:28:40.401 Controller Capabilities/Features 00:28:40.401 ================================ 00:28:40.401 Vendor ID: 0000 00:28:40.401 Subsystem Vendor ID: 0000 00:28:40.401 Serial Number: .................... 00:28:40.401 Model Number: ........................................ 
00:28:40.401 Firmware Version: 24.09 00:28:40.401 Recommended Arb Burst: 0 00:28:40.401 IEEE OUI Identifier: 00 00 00 00:28:40.401 Multi-path I/O 00:28:40.401 May have multiple subsystem ports: No 00:28:40.401 May have multiple controllers: No 00:28:40.401 Associated with SR-IOV VF: No 00:28:40.401 Max Data Transfer Size: 131072 00:28:40.401 Max Number of Namespaces: 0 00:28:40.401 Max Number of I/O Queues: 1024 00:28:40.401 NVMe Specification Version (VS): 1.3 00:28:40.401 NVMe Specification Version (Identify): 1.3 00:28:40.401 Maximum Queue Entries: 128 00:28:40.401 Contiguous Queues Required: Yes 00:28:40.401 Arbitration Mechanisms Supported 00:28:40.401 Weighted Round Robin: Not Supported 00:28:40.401 Vendor Specific: Not Supported 00:28:40.401 Reset Timeout: 15000 ms 00:28:40.401 Doorbell Stride: 4 bytes 00:28:40.401 NVM Subsystem Reset: Not Supported 00:28:40.401 Command Sets Supported 00:28:40.401 NVM Command Set: Supported 00:28:40.401 Boot Partition: Not Supported 00:28:40.401 Memory Page Size Minimum: 4096 bytes 00:28:40.401 Memory Page Size Maximum: 4096 bytes 00:28:40.401 Persistent Memory Region: Not Supported 00:28:40.401 Optional Asynchronous Events Supported 00:28:40.401 Namespace Attribute Notices: Not Supported 00:28:40.401 Firmware Activation Notices: Not Supported 00:28:40.401 ANA Change Notices: Not Supported 00:28:40.401 PLE Aggregate Log Change Notices: Not Supported 00:28:40.401 LBA Status Info Alert Notices: Not Supported 00:28:40.401 EGE Aggregate Log Change Notices: Not Supported 00:28:40.401 Normal NVM Subsystem Shutdown event: Not Supported 00:28:40.401 Zone Descriptor Change Notices: Not Supported 00:28:40.401 Discovery Log Change Notices: Supported 00:28:40.401 Controller Attributes 00:28:40.401 128-bit Host Identifier: Not Supported 00:28:40.401 Non-Operational Permissive Mode: Not Supported 00:28:40.401 NVM Sets: Not Supported 00:28:40.401 Read Recovery Levels: Not Supported 00:28:40.401 Endurance Groups: Not Supported 00:28:40.401 Predictable Latency Mode: Not Supported 00:28:40.401 Traffic Based Keep ALive: Not Supported 00:28:40.401 Namespace Granularity: Not Supported 00:28:40.401 SQ Associations: Not Supported 00:28:40.401 UUID List: Not Supported 00:28:40.401 Multi-Domain Subsystem: Not Supported 00:28:40.401 Fixed Capacity Management: Not Supported 00:28:40.401 Variable Capacity Management: Not Supported 00:28:40.401 Delete Endurance Group: Not Supported 00:28:40.401 Delete NVM Set: Not Supported 00:28:40.401 Extended LBA Formats Supported: Not Supported 00:28:40.401 Flexible Data Placement Supported: Not Supported 00:28:40.401 00:28:40.401 Controller Memory Buffer Support 00:28:40.401 ================================ 00:28:40.401 Supported: No 00:28:40.401 00:28:40.401 Persistent Memory Region Support 00:28:40.401 ================================ 00:28:40.401 Supported: No 00:28:40.401 00:28:40.401 Admin Command Set Attributes 00:28:40.401 ============================ 00:28:40.401 Security Send/Receive: Not Supported 00:28:40.401 Format NVM: Not Supported 00:28:40.401 Firmware Activate/Download: Not Supported 00:28:40.401 Namespace Management: Not Supported 00:28:40.401 Device Self-Test: Not Supported 00:28:40.401 Directives: Not Supported 00:28:40.401 NVMe-MI: Not Supported 00:28:40.401 Virtualization Management: Not Supported 00:28:40.401 Doorbell Buffer Config: Not Supported 00:28:40.401 Get LBA Status Capability: Not Supported 00:28:40.401 Command & Feature Lockdown Capability: Not Supported 00:28:40.401 Abort Command Limit: 1 00:28:40.401 Async 
Event Request Limit: 4 00:28:40.401 Number of Firmware Slots: N/A 00:28:40.401 Firmware Slot 1 Read-Only: N/A 00:28:40.401 Firmware Activation Without Reset: N/A 00:28:40.401 Multiple Update Detection Support: N/A 00:28:40.401 Firmware Update Granularity: No Information Provided 00:28:40.401 Per-Namespace SMART Log: No 00:28:40.401 Asymmetric Namespace Access Log Page: Not Supported 00:28:40.401 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:40.401 Command Effects Log Page: Not Supported 00:28:40.401 Get Log Page Extended Data: Supported 00:28:40.401 Telemetry Log Pages: Not Supported 00:28:40.401 Persistent Event Log Pages: Not Supported 00:28:40.401 Supported Log Pages Log Page: May Support 00:28:40.401 Commands Supported & Effects Log Page: Not Supported 00:28:40.401 Feature Identifiers & Effects Log Page:May Support 00:28:40.401 NVMe-MI Commands & Effects Log Page: May Support 00:28:40.401 Data Area 4 for Telemetry Log: Not Supported 00:28:40.401 Error Log Page Entries Supported: 128 00:28:40.401 Keep Alive: Not Supported 00:28:40.401 00:28:40.401 NVM Command Set Attributes 00:28:40.401 ========================== 00:28:40.401 Submission Queue Entry Size 00:28:40.401 Max: 1 00:28:40.401 Min: 1 00:28:40.401 Completion Queue Entry Size 00:28:40.401 Max: 1 00:28:40.401 Min: 1 00:28:40.401 Number of Namespaces: 0 00:28:40.401 Compare Command: Not Supported 00:28:40.401 Write Uncorrectable Command: Not Supported 00:28:40.401 Dataset Management Command: Not Supported 00:28:40.401 Write Zeroes Command: Not Supported 00:28:40.401 Set Features Save Field: Not Supported 00:28:40.401 Reservations: Not Supported 00:28:40.401 Timestamp: Not Supported 00:28:40.401 Copy: Not Supported 00:28:40.401 Volatile Write Cache: Not Present 00:28:40.401 Atomic Write Unit (Normal): 1 00:28:40.401 Atomic Write Unit (PFail): 1 00:28:40.401 Atomic Compare & Write Unit: 1 00:28:40.401 Fused Compare & Write: Supported 00:28:40.401 Scatter-Gather List 00:28:40.401 SGL Command Set: Supported 00:28:40.401 SGL Keyed: Supported 00:28:40.401 SGL Bit Bucket Descriptor: Not Supported 00:28:40.401 SGL Metadata Pointer: Not Supported 00:28:40.401 Oversized SGL: Not Supported 00:28:40.401 SGL Metadata Address: Not Supported 00:28:40.401 SGL Offset: Supported 00:28:40.401 Transport SGL Data Block: Not Supported 00:28:40.401 Replay Protected Memory Block: Not Supported 00:28:40.401 00:28:40.401 Firmware Slot Information 00:28:40.401 ========================= 00:28:40.401 Active slot: 0 00:28:40.401 00:28:40.401 00:28:40.401 Error Log 00:28:40.401 ========= 00:28:40.401 00:28:40.401 Active Namespaces 00:28:40.401 ================= 00:28:40.401 Discovery Log Page 00:28:40.401 ================== 00:28:40.401 Generation Counter: 2 00:28:40.401 Number of Records: 2 00:28:40.401 Record Format: 0 00:28:40.401 00:28:40.401 Discovery Log Entry 0 00:28:40.401 ---------------------- 00:28:40.401 Transport Type: 3 (TCP) 00:28:40.401 Address Family: 1 (IPv4) 00:28:40.401 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:40.401 Entry Flags: 00:28:40.401 Duplicate Returned Information: 1 00:28:40.401 Explicit Persistent Connection Support for Discovery: 1 00:28:40.401 Transport Requirements: 00:28:40.401 Secure Channel: Not Required 00:28:40.401 Port ID: 0 (0x0000) 00:28:40.401 Controller ID: 65535 (0xffff) 00:28:40.401 Admin Max SQ Size: 128 00:28:40.401 Transport Service Identifier: 4420 00:28:40.401 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:40.401 Transport Address: 10.0.0.2 00:28:40.401 
Discovery Log Entry 1 00:28:40.401 ---------------------- 00:28:40.401 Transport Type: 3 (TCP) 00:28:40.401 Address Family: 1 (IPv4) 00:28:40.401 Subsystem Type: 2 (NVM Subsystem) 00:28:40.401 Entry Flags: 00:28:40.401 Duplicate Returned Information: 0 00:28:40.401 Explicit Persistent Connection Support for Discovery: 0 00:28:40.401 Transport Requirements: 00:28:40.401 Secure Channel: Not Required 00:28:40.401 Port ID: 0 (0x0000) 00:28:40.401 Controller ID: 65535 (0xffff) 00:28:40.401 Admin Max SQ Size: 128 00:28:40.401 Transport Service Identifier: 4420 00:28:40.401 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:40.401 Transport Address: 10.0.0.2 [2024-07-14 02:15:45.918173] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:40.401 [2024-07-14 02:15:45.918210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a880) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.918222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.401 [2024-07-14 02:15:45.918232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235aa00) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.918239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.401 [2024-07-14 02:15:45.918247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ab80) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.918255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.401 [2024-07-14 02:15:45.918263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ad00) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.918270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.401 [2024-07-14 02:15:45.918288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.918311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.918318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f3fe0) 00:28:40.401 [2024-07-14 02:15:45.918328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.401 [2024-07-14 02:15:45.918352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235ad00, cid 3, qid 0 00:28:40.401 [2024-07-14 02:15:45.918531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.401 [2024-07-14 02:15:45.918544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.401 [2024-07-14 02:15:45.918550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.918557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ad00) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.918572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.918581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.918587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f3fe0) 00:28:40.401 [2024-07-14 
02:15:45.918597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.401 [2024-07-14 02:15:45.918623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235ad00, cid 3, qid 0 00:28:40.401 [2024-07-14 02:15:45.918785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.401 [2024-07-14 02:15:45.918797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.401 [2024-07-14 02:15:45.918804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.918810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ad00) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.918819] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:40.401 [2024-07-14 02:15:45.918827] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:40.401 [2024-07-14 02:15:45.918843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.922881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.922895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f3fe0) 00:28:40.401 [2024-07-14 02:15:45.922907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.401 [2024-07-14 02:15:45.922932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235ad00, cid 3, qid 0 00:28:40.401 [2024-07-14 02:15:45.923111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.401 [2024-07-14 02:15:45.923127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.401 [2024-07-14 02:15:45.923134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.401 [2024-07-14 02:15:45.923141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235ad00) on tqpair=0x22f3fe0 00:28:40.401 [2024-07-14 02:15:45.923156] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:28:40.401 00:28:40.401 02:15:45 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:40.401 [2024-07-14 02:15:45.958906] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
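Outside the harness, each identify pass is simply the bundled spdk_nvme_identify example pointed at the listener via a transport ID string. The first pass above used subnqn:nqn.2014-08.org.nvmexpress.discovery and produced the discovery controller report; the second pass, whose startup is logged here, targets the NVM subsystem itself. A sketch, run from the SPDK checkout (-L all enables the debug log flags that produce the nvme_tcp/nvme_ctrlr traces seen in this output):

  build/bin/spdk_nvme_identify \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all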
00:28:40.402 [2024-07-14 02:15:45.958965] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683525 ] 00:28:40.402 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.402 [2024-07-14 02:15:45.993657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:40.402 [2024-07-14 02:15:45.993705] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:40.402 [2024-07-14 02:15:45.993715] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:40.402 [2024-07-14 02:15:45.993730] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:40.402 [2024-07-14 02:15:45.993739] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:40.402 [2024-07-14 02:15:45.994029] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:40.402 [2024-07-14 02:15:45.994072] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18fbfe0 0 00:28:40.402 [2024-07-14 02:15:46.004880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:40.402 [2024-07-14 02:15:46.004899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:40.402 [2024-07-14 02:15:46.004907] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:40.402 [2024-07-14 02:15:46.004913] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:40.402 [2024-07-14 02:15:46.004951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.004963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.004970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.004985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:40.402 [2024-07-14 02:15:46.005011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.012883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.012901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.012908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.012915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.012928] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:40.402 [2024-07-14 02:15:46.012938] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:40.402 [2024-07-14 02:15:46.012947] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:40.402 [2024-07-14 02:15:46.012965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.012973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:40.402 [2024-07-14 02:15:46.012980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.012991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.013015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.013204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.013217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.013224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.013231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.013239] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:40.402 [2024-07-14 02:15:46.013252] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:40.402 [2024-07-14 02:15:46.013264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.013271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.013278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.013288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.013310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.013465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.013480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.013491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.013498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.013507] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:40.402 [2024-07-14 02:15:46.013521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:40.402 [2024-07-14 02:15:46.013533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.013541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.013547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.013558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.013579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.013757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.013772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:28:40.402 [2024-07-14 02:15:46.013779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.013786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.013795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:40.402 [2024-07-14 02:15:46.013811] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.013820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.013826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.013837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.013876] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.014062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.014077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.014084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.014091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.014098] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:40.402 [2024-07-14 02:15:46.014107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:40.402 [2024-07-14 02:15:46.014120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:40.402 [2024-07-14 02:15:46.014230] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:40.402 [2024-07-14 02:15:46.014238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:40.402 [2024-07-14 02:15:46.014264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.014272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.014278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.014288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.014308] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.014558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.014571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.014578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.014585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on 
tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.014593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:40.402 [2024-07-14 02:15:46.014609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.014618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.014624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.014635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.014656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.014806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.014821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.014828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.014834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.014842] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:40.402 [2024-07-14 02:15:46.014850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.014864] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:40.402 [2024-07-14 02:15:46.014886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.014900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.014907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.014918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.014940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.015124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.402 [2024-07-14 02:15:46.015137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.402 [2024-07-14 02:15:46.015144] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015150] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18fbfe0): datao=0, datal=4096, cccid=0 00:28:40.402 [2024-07-14 02:15:46.015158] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1962880) on tqpair(0x18fbfe0): expected_datao=0, payload_size=4096 00:28:40.402 [2024-07-14 02:15:46.015166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015195] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015210] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.015347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.015353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.015374] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:40.402 [2024-07-14 02:15:46.015386] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:40.402 [2024-07-14 02:15:46.015395] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:40.402 [2024-07-14 02:15:46.015401] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:40.402 [2024-07-14 02:15:46.015409] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:40.402 [2024-07-14 02:15:46.015416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.015430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.015441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.015465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:40.402 [2024-07-14 02:15:46.015501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.015727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.015740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.015747] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.015764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.015788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.402 [2024-07-14 02:15:46.015798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015812] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.015820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.402 [2024-07-14 02:15:46.015830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.015875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.402 [2024-07-14 02:15:46.015886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.015908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.402 [2024-07-14 02:15:46.015916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.015938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.015951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.015958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.015968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.015990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962880, cid 0, qid 0 00:28:40.402 [2024-07-14 02:15:46.016016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962a00, cid 1, qid 0 00:28:40.402 [2024-07-14 02:15:46.016025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962b80, cid 2, qid 0 00:28:40.402 [2024-07-14 02:15:46.016033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962d00, cid 3, qid 0 00:28:40.402 [2024-07-14 02:15:46.016040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962e80, cid 4, qid 0 00:28:40.402 [2024-07-14 02:15:46.016251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.016266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.016273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.016280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962e80) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.016288] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:40.402 [2024-07-14 02:15:46.016313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.016327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.016338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.016349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.016356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.016377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.016387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:40.402 [2024-07-14 02:15:46.016407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962e80, cid 4, qid 0 00:28:40.402 [2024-07-14 02:15:46.016608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.402 [2024-07-14 02:15:46.016624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.016631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.016638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962e80) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.016702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.016720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.016750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.016757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.016768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.016803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962e80, cid 4, qid 0 00:28:40.402 [2024-07-14 02:15:46.020880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.402 [2024-07-14 02:15:46.020897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.402 [2024-07-14 02:15:46.020904] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.020910] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18fbfe0): datao=0, datal=4096, cccid=4 00:28:40.402 [2024-07-14 02:15:46.020918] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1962e80) on tqpair(0x18fbfe0): expected_datao=0, payload_size=4096 00:28:40.402 [2024-07-14 02:15:46.020925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.020935] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.020942] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.060885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:28:40.402 [2024-07-14 02:15:46.060904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.402 [2024-07-14 02:15:46.060911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.060918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962e80) on tqpair=0x18fbfe0 00:28:40.402 [2024-07-14 02:15:46.060940] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:40.402 [2024-07-14 02:15:46.060959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.060978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:40.402 [2024-07-14 02:15:46.060991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.060999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18fbfe0) 00:28:40.402 [2024-07-14 02:15:46.061010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.402 [2024-07-14 02:15:46.061033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962e80, cid 4, qid 0 00:28:40.402 [2024-07-14 02:15:46.061248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.402 [2024-07-14 02:15:46.061264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.402 [2024-07-14 02:15:46.061271] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.061278] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18fbfe0): datao=0, datal=4096, cccid=4 00:28:40.402 [2024-07-14 02:15:46.061286] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1962e80) on tqpair(0x18fbfe0): expected_datao=0, payload_size=4096 00:28:40.402 [2024-07-14 02:15:46.061293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.061317] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.402 [2024-07-14 02:15:46.061327] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.662 [2024-07-14 02:15:46.106879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.662 [2024-07-14 02:15:46.106900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.662 [2024-07-14 02:15:46.106908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.662 [2024-07-14 02:15:46.106915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962e80) on tqpair=0x18fbfe0 00:28:40.662 [2024-07-14 02:15:46.106940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:40.662 [2024-07-14 02:15:46.106960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:40.662 [2024-07-14 02:15:46.106975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.662 [2024-07-14 02:15:46.106983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.106999] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.663 [2024-07-14 02:15:46.107024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962e80, cid 4, qid 0 00:28:40.663 [2024-07-14 02:15:46.107226] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.663 [2024-07-14 02:15:46.107243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.663 [2024-07-14 02:15:46.107250] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.107257] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18fbfe0): datao=0, datal=4096, cccid=4 00:28:40.663 [2024-07-14 02:15:46.107265] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1962e80) on tqpair(0x18fbfe0): expected_datao=0, payload_size=4096 00:28:40.663 [2024-07-14 02:15:46.107272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.107294] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.107303] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.663 [2024-07-14 02:15:46.149105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.663 [2024-07-14 02:15:46.149113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962e80) on tqpair=0x18fbfe0 00:28:40.663 [2024-07-14 02:15:46.149133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:40.663 [2024-07-14 02:15:46.149148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:40.663 [2024-07-14 02:15:46.149164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:40.663 [2024-07-14 02:15:46.149175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:40.663 [2024-07-14 02:15:46.149183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:40.663 [2024-07-14 02:15:46.149192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:40.663 [2024-07-14 02:15:46.149201] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:40.663 [2024-07-14 02:15:46.149209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:40.663 [2024-07-14 02:15:46.149218] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:40.663 [2024-07-14 02:15:46.149236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.149257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.663 [2024-07-14 02:15:46.149268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.149291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.663 [2024-07-14 02:15:46.149317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962e80, cid 4, qid 0 00:28:40.663 [2024-07-14 02:15:46.149333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1963000, cid 5, qid 0 00:28:40.663 [2024-07-14 02:15:46.149486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.663 [2024-07-14 02:15:46.149499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.663 [2024-07-14 02:15:46.149506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962e80) on tqpair=0x18fbfe0 00:28:40.663 [2024-07-14 02:15:46.149522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.663 [2024-07-14 02:15:46.149531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.663 [2024-07-14 02:15:46.149538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1963000) on tqpair=0x18fbfe0 00:28:40.663 [2024-07-14 02:15:46.149560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.149579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.663 [2024-07-14 02:15:46.149614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1963000, cid 5, qid 0 00:28:40.663 [2024-07-14 02:15:46.149844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.663 [2024-07-14 02:15:46.149860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.663 [2024-07-14 02:15:46.149875] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1963000) on tqpair=0x18fbfe0 00:28:40.663 [2024-07-14 02:15:46.149899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.149908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.149919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.663 [2024-07-14 02:15:46.149940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1963000, cid 5, qid 0 00:28:40.663 [2024-07-14 02:15:46.150091] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.663 [2024-07-14 02:15:46.150106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.663 [2024-07-14 02:15:46.150113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.150120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1963000) on tqpair=0x18fbfe0 00:28:40.663 [2024-07-14 02:15:46.150136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.150145] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.150155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.663 [2024-07-14 02:15:46.150176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1963000, cid 5, qid 0 00:28:40.663 [2024-07-14 02:15:46.150328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.663 [2024-07-14 02:15:46.150341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.663 [2024-07-14 02:15:46.150348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.150355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1963000) on tqpair=0x18fbfe0 00:28:40.663 [2024-07-14 02:15:46.150378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.150389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.150400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.663 [2024-07-14 02:15:46.150415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.150423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.150432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.663 [2024-07-14 02:15:46.150444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.150466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.150475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.663 [2024-07-14 02:15:46.150486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.150493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18fbfe0) 00:28:40.663 [2024-07-14 02:15:46.150502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.663 [2024-07-14 02:15:46.150523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1963000, cid 5, qid 0 00:28:40.663 [2024-07-14 02:15:46.150549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962e80, cid 4, qid 0 
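The four GET LOG PAGE (02h) commands above (error log 01h, SMART/health 02h, firmware slot 03h and commands supported and effects 05h, all with nsid ffffffff) are issued internally while the driver builds its supported log page map. A hedged sketch of issuing one of them by hand, the SMART/health page, through the public async admin API; get_health_page and g_done are illustrative names for the sketch, not SPDK symbols:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"
    #include "spdk/nvme_spec.h"

    static bool g_done;

    static void log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE failed\n");
        }
        g_done = true;
    }

    static int get_health_page(struct spdk_nvme_ctrlr *ctrlr,
                               struct spdk_nvme_health_information_page *page)
    {
        /* SMART/health page for the global namespace tag (nsid 0xffffffff),
         * matching the cdw10:007f0002 nsid:ffffffff command in the log. */
        int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                     SPDK_NVME_LOG_HEALTH_INFORMATION,
                     SPDK_NVME_GLOBAL_NS_TAG,
                     page, sizeof(*page), 0,
                     log_page_done, NULL);
        if (rc != 0) {
            return rc;
        }
        /* Busy-poll the admin queue until the completion callback fires. */
        while (!g_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }
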
00:28:40.663 [2024-07-14 02:15:46.150557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1963180, cid 6, qid 0 00:28:40.663 [2024-07-14 02:15:46.150565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1963300, cid 7, qid 0 00:28:40.663 [2024-07-14 02:15:46.150844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.663 [2024-07-14 02:15:46.150860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.663 [2024-07-14 02:15:46.154877] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.154888] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18fbfe0): datao=0, datal=8192, cccid=5 00:28:40.663 [2024-07-14 02:15:46.154896] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1963000) on tqpair(0x18fbfe0): expected_datao=0, payload_size=8192 00:28:40.663 [2024-07-14 02:15:46.154904] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.154928] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.154938] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.154947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.663 [2024-07-14 02:15:46.154956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.663 [2024-07-14 02:15:46.154962] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.154969] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18fbfe0): datao=0, datal=512, cccid=4 00:28:40.663 [2024-07-14 02:15:46.154976] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1962e80) on tqpair(0x18fbfe0): expected_datao=0, payload_size=512 00:28:40.663 [2024-07-14 02:15:46.154984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.154993] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.155000] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.155008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.663 [2024-07-14 02:15:46.155017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.663 [2024-07-14 02:15:46.155023] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.155029] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18fbfe0): datao=0, datal=512, cccid=6 00:28:40.663 [2024-07-14 02:15:46.155037] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1963180) on tqpair(0x18fbfe0): expected_datao=0, payload_size=512 00:28:40.663 [2024-07-14 02:15:46.155048] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.155058] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.155064] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.155073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.663 [2024-07-14 02:15:46.155081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.663 [2024-07-14 02:15:46.155088] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.663 [2024-07-14 02:15:46.155094] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18fbfe0): datao=0, datal=4096, cccid=7 00:28:40.664 [2024-07-14 02:15:46.155102] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1963300) on tqpair(0x18fbfe0): expected_datao=0, payload_size=4096 00:28:40.664 [2024-07-14 02:15:46.155109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.664 [2024-07-14 02:15:46.155118] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.664 [2024-07-14 02:15:46.155125] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.664 [2024-07-14 02:15:46.155134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.664 [2024-07-14 02:15:46.155142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.664 [2024-07-14 02:15:46.155163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.664 [2024-07-14 02:15:46.155169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1963000) on tqpair=0x18fbfe0 00:28:40.664 [2024-07-14 02:15:46.155186] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.664 [2024-07-14 02:15:46.155196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.664 [2024-07-14 02:15:46.155202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.664 [2024-07-14 02:15:46.155209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962e80) on tqpair=0x18fbfe0 00:28:40.664 [2024-07-14 02:15:46.155222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.664 [2024-07-14 02:15:46.155231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.664 [2024-07-14 02:15:46.155238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.664 [2024-07-14 02:15:46.155244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1963180) on tqpair=0x18fbfe0 00:28:40.664 [2024-07-14 02:15:46.155253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.664 [2024-07-14 02:15:46.155262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.664 [2024-07-14 02:15:46.155268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.664 [2024-07-14 02:15:46.155275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1963300) on tqpair=0x18fbfe0 00:28:40.664 ===================================================== 00:28:40.664 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.664 ===================================================== 00:28:40.664 Controller Capabilities/Features 00:28:40.664 ================================ 00:28:40.664 Vendor ID: 8086 00:28:40.664 Subsystem Vendor ID: 8086 00:28:40.664 Serial Number: SPDK00000000000001 00:28:40.664 Model Number: SPDK bdev Controller 00:28:40.664 Firmware Version: 24.09 00:28:40.664 Recommended Arb Burst: 6 00:28:40.664 IEEE OUI Identifier: e4 d2 5c 00:28:40.664 Multi-path I/O 00:28:40.664 May have multiple subsystem ports: Yes 00:28:40.664 May have multiple controllers: Yes 00:28:40.664 Associated with SR-IOV VF: No 00:28:40.664 Max Data Transfer Size: 131072 00:28:40.664 Max Number of Namespaces: 32 00:28:40.664 Max Number of I/O Queues: 127 00:28:40.664 NVMe Specification Version (VS): 1.3 00:28:40.664 NVMe Specification Version (Identify): 1.3 00:28:40.664 Maximum Queue Entries: 128 00:28:40.664 Contiguous Queues Required: Yes 00:28:40.664 
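The Max Data Transfer Size of 131072 bytes in the report above (and the earlier "MDTS max_xfer_size 131072" debug entry) follows from the Identify Controller MDTS field, which scales the 4096-byte minimum memory page size by a power of two. The field value of 5 in this worked example is inferred from those two reported numbers, not printed directly by the tool:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t min_page_size = 4096; /* Memory Page Size Minimum from the report */
        uint8_t  mdts = 5;             /* inferred: 4096 << 5 == 131072 */

        /* MDTS = 0 would mean "no transfer size limit" per the NVMe spec. */
        printf("max transfer = %u bytes\n", min_page_size << mdts);
        return 0;
    }
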
Arbitration Mechanisms Supported 00:28:40.664 Weighted Round Robin: Not Supported 00:28:40.664 Vendor Specific: Not Supported 00:28:40.664 Reset Timeout: 15000 ms 00:28:40.664 Doorbell Stride: 4 bytes 00:28:40.664 NVM Subsystem Reset: Not Supported 00:28:40.664 Command Sets Supported 00:28:40.664 NVM Command Set: Supported 00:28:40.664 Boot Partition: Not Supported 00:28:40.664 Memory Page Size Minimum: 4096 bytes 00:28:40.664 Memory Page Size Maximum: 4096 bytes 00:28:40.664 Persistent Memory Region: Not Supported 00:28:40.664 Optional Asynchronous Events Supported 00:28:40.664 Namespace Attribute Notices: Supported 00:28:40.664 Firmware Activation Notices: Not Supported 00:28:40.664 ANA Change Notices: Not Supported 00:28:40.664 PLE Aggregate Log Change Notices: Not Supported 00:28:40.664 LBA Status Info Alert Notices: Not Supported 00:28:40.664 EGE Aggregate Log Change Notices: Not Supported 00:28:40.664 Normal NVM Subsystem Shutdown event: Not Supported 00:28:40.664 Zone Descriptor Change Notices: Not Supported 00:28:40.664 Discovery Log Change Notices: Not Supported 00:28:40.664 Controller Attributes 00:28:40.664 128-bit Host Identifier: Supported 00:28:40.664 Non-Operational Permissive Mode: Not Supported 00:28:40.664 NVM Sets: Not Supported 00:28:40.664 Read Recovery Levels: Not Supported 00:28:40.664 Endurance Groups: Not Supported 00:28:40.664 Predictable Latency Mode: Not Supported 00:28:40.664 Traffic Based Keep ALive: Not Supported 00:28:40.664 Namespace Granularity: Not Supported 00:28:40.664 SQ Associations: Not Supported 00:28:40.664 UUID List: Not Supported 00:28:40.664 Multi-Domain Subsystem: Not Supported 00:28:40.664 Fixed Capacity Management: Not Supported 00:28:40.664 Variable Capacity Management: Not Supported 00:28:40.664 Delete Endurance Group: Not Supported 00:28:40.664 Delete NVM Set: Not Supported 00:28:40.664 Extended LBA Formats Supported: Not Supported 00:28:40.664 Flexible Data Placement Supported: Not Supported 00:28:40.664 00:28:40.664 Controller Memory Buffer Support 00:28:40.664 ================================ 00:28:40.664 Supported: No 00:28:40.664 00:28:40.664 Persistent Memory Region Support 00:28:40.664 ================================ 00:28:40.664 Supported: No 00:28:40.664 00:28:40.664 Admin Command Set Attributes 00:28:40.664 ============================ 00:28:40.664 Security Send/Receive: Not Supported 00:28:40.664 Format NVM: Not Supported 00:28:40.664 Firmware Activate/Download: Not Supported 00:28:40.664 Namespace Management: Not Supported 00:28:40.664 Device Self-Test: Not Supported 00:28:40.664 Directives: Not Supported 00:28:40.664 NVMe-MI: Not Supported 00:28:40.664 Virtualization Management: Not Supported 00:28:40.664 Doorbell Buffer Config: Not Supported 00:28:40.664 Get LBA Status Capability: Not Supported 00:28:40.664 Command & Feature Lockdown Capability: Not Supported 00:28:40.664 Abort Command Limit: 4 00:28:40.664 Async Event Request Limit: 4 00:28:40.664 Number of Firmware Slots: N/A 00:28:40.664 Firmware Slot 1 Read-Only: N/A 00:28:40.664 Firmware Activation Without Reset: N/A 00:28:40.664 Multiple Update Detection Support: N/A 00:28:40.664 Firmware Update Granularity: No Information Provided 00:28:40.664 Per-Namespace SMART Log: No 00:28:40.664 Asymmetric Namespace Access Log Page: Not Supported 00:28:40.664 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:40.664 Command Effects Log Page: Supported 00:28:40.664 Get Log Page Extended Data: Supported 00:28:40.664 Telemetry Log Pages: Not Supported 00:28:40.664 Persistent Event Log 
Pages: Not Supported 00:28:40.664 Supported Log Pages Log Page: May Support 00:28:40.664 Commands Supported & Effects Log Page: Not Supported 00:28:40.664 Feature Identifiers & Effects Log Page:May Support 00:28:40.664 NVMe-MI Commands & Effects Log Page: May Support 00:28:40.664 Data Area 4 for Telemetry Log: Not Supported 00:28:40.664 Error Log Page Entries Supported: 128 00:28:40.664 Keep Alive: Supported 00:28:40.664 Keep Alive Granularity: 10000 ms 00:28:40.664 00:28:40.664 NVM Command Set Attributes 00:28:40.664 ========================== 00:28:40.664 Submission Queue Entry Size 00:28:40.664 Max: 64 00:28:40.664 Min: 64 00:28:40.664 Completion Queue Entry Size 00:28:40.664 Max: 16 00:28:40.664 Min: 16 00:28:40.664 Number of Namespaces: 32 00:28:40.664 Compare Command: Supported 00:28:40.664 Write Uncorrectable Command: Not Supported 00:28:40.664 Dataset Management Command: Supported 00:28:40.664 Write Zeroes Command: Supported 00:28:40.664 Set Features Save Field: Not Supported 00:28:40.664 Reservations: Supported 00:28:40.664 Timestamp: Not Supported 00:28:40.664 Copy: Supported 00:28:40.664 Volatile Write Cache: Present 00:28:40.664 Atomic Write Unit (Normal): 1 00:28:40.664 Atomic Write Unit (PFail): 1 00:28:40.664 Atomic Compare & Write Unit: 1 00:28:40.664 Fused Compare & Write: Supported 00:28:40.664 Scatter-Gather List 00:28:40.664 SGL Command Set: Supported 00:28:40.664 SGL Keyed: Supported 00:28:40.664 SGL Bit Bucket Descriptor: Not Supported 00:28:40.664 SGL Metadata Pointer: Not Supported 00:28:40.664 Oversized SGL: Not Supported 00:28:40.664 SGL Metadata Address: Not Supported 00:28:40.664 SGL Offset: Supported 00:28:40.664 Transport SGL Data Block: Not Supported 00:28:40.664 Replay Protected Memory Block: Not Supported 00:28:40.664 00:28:40.664 Firmware Slot Information 00:28:40.664 ========================= 00:28:40.664 Active slot: 1 00:28:40.664 Slot 1 Firmware Revision: 24.09 00:28:40.664 00:28:40.664 00:28:40.664 Commands Supported and Effects 00:28:40.664 ============================== 00:28:40.664 Admin Commands 00:28:40.664 -------------- 00:28:40.664 Get Log Page (02h): Supported 00:28:40.664 Identify (06h): Supported 00:28:40.664 Abort (08h): Supported 00:28:40.664 Set Features (09h): Supported 00:28:40.664 Get Features (0Ah): Supported 00:28:40.664 Asynchronous Event Request (0Ch): Supported 00:28:40.664 Keep Alive (18h): Supported 00:28:40.664 I/O Commands 00:28:40.664 ------------ 00:28:40.664 Flush (00h): Supported LBA-Change 00:28:40.664 Write (01h): Supported LBA-Change 00:28:40.664 Read (02h): Supported 00:28:40.664 Compare (05h): Supported 00:28:40.664 Write Zeroes (08h): Supported LBA-Change 00:28:40.664 Dataset Management (09h): Supported LBA-Change 00:28:40.664 Copy (19h): Supported LBA-Change 00:28:40.664 00:28:40.664 Error Log 00:28:40.664 ========= 00:28:40.664 00:28:40.664 Arbitration 00:28:40.664 =========== 00:28:40.664 Arbitration Burst: 1 00:28:40.664 00:28:40.664 Power Management 00:28:40.664 ================ 00:28:40.664 Number of Power States: 1 00:28:40.664 Current Power State: Power State #0 00:28:40.665 Power State #0: 00:28:40.665 Max Power: 0.00 W 00:28:40.665 Non-Operational State: Operational 00:28:40.665 Entry Latency: Not Reported 00:28:40.665 Exit Latency: Not Reported 00:28:40.665 Relative Read Throughput: 0 00:28:40.665 Relative Read Latency: 0 00:28:40.665 Relative Write Throughput: 0 00:28:40.665 Relative Write Latency: 0 00:28:40.665 Idle Power: Not Reported 00:28:40.665 Active Power: Not Reported 00:28:40.665 
Non-Operational Permissive Mode: Not Supported 00:28:40.665 00:28:40.665 Health Information 00:28:40.665 ================== 00:28:40.665 Critical Warnings: 00:28:40.665 Available Spare Space: OK 00:28:40.665 Temperature: OK 00:28:40.665 Device Reliability: OK 00:28:40.665 Read Only: No 00:28:40.665 Volatile Memory Backup: OK 00:28:40.665 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:40.665 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:40.665 Available Spare: 0% 00:28:40.665 Available Spare Threshold: 0% 00:28:40.665 Life Percentage Used:[2024-07-14 02:15:46.155381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.155392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18fbfe0) 00:28:40.665 [2024-07-14 02:15:46.155403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.665 [2024-07-14 02:15:46.155425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1963300, cid 7, qid 0 00:28:40.665 [2024-07-14 02:15:46.155632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.665 [2024-07-14 02:15:46.155648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.665 [2024-07-14 02:15:46.155654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.155661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1963300) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.155707] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:40.665 [2024-07-14 02:15:46.155726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962880) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.155752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.665 [2024-07-14 02:15:46.155765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962a00) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.155773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.665 [2024-07-14 02:15:46.155781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962b80) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.155788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.665 [2024-07-14 02:15:46.155796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962d00) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.155819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.665 [2024-07-14 02:15:46.155831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.155839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.155844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18fbfe0) 00:28:40.665 [2024-07-14 02:15:46.155855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.665 [2024-07-14 02:15:46.155902] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962d00, cid 3, qid 0 00:28:40.665 [2024-07-14 02:15:46.156055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.665 [2024-07-14 02:15:46.156067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.665 [2024-07-14 02:15:46.156074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962d00) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.156093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18fbfe0) 00:28:40.665 [2024-07-14 02:15:46.156117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.665 [2024-07-14 02:15:46.156143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962d00, cid 3, qid 0 00:28:40.665 [2024-07-14 02:15:46.156304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.665 [2024-07-14 02:15:46.156319] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.665 [2024-07-14 02:15:46.156326] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156333] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962d00) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.156340] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:40.665 [2024-07-14 02:15:46.156348] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:40.665 [2024-07-14 02:15:46.156364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18fbfe0) 00:28:40.665 [2024-07-14 02:15:46.156390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.665 [2024-07-14 02:15:46.156410] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962d00, cid 3, qid 0 00:28:40.665 [2024-07-14 02:15:46.156578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.665 [2024-07-14 02:15:46.156593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.665 [2024-07-14 02:15:46.156599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962d00) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.156627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18fbfe0) 00:28:40.665 [2024-07-14 02:15:46.156654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.665 [2024-07-14 02:15:46.156674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962d00, cid 3, qid 0 00:28:40.665 [2024-07-14 02:15:46.156814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.665 [2024-07-14 02:15:46.156826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.665 [2024-07-14 02:15:46.156833] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962d00) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.156855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.156879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18fbfe0) 00:28:40.665 [2024-07-14 02:15:46.156890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.665 [2024-07-14 02:15:46.156911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962d00, cid 3, qid 0 00:28:40.665 [2024-07-14 02:15:46.157059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.665 [2024-07-14 02:15:46.157074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.665 [2024-07-14 02:15:46.157081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.157088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962d00) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.157104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.157113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.157120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18fbfe0) 00:28:40.665 [2024-07-14 02:15:46.157130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.665 [2024-07-14 02:15:46.157151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962d00, cid 3, qid 0 00:28:40.665 [2024-07-14 02:15:46.160878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.665 [2024-07-14 02:15:46.160895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.665 [2024-07-14 02:15:46.160902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.160909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962d00) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.160928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.160937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.160944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18fbfe0) 00:28:40.665 [2024-07-14 02:15:46.160955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.665 [2024-07-14 02:15:46.160977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962d00, cid 3, qid 0 00:28:40.665 [2024-07-14 
02:15:46.161139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.665 [2024-07-14 02:15:46.161154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.665 [2024-07-14 02:15:46.161161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.665 [2024-07-14 02:15:46.161168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962d00) on tqpair=0x18fbfe0 00:28:40.665 [2024-07-14 02:15:46.161181] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:28:40.665 0% 00:28:40.665 Data Units Read: 0 00:28:40.665 Data Units Written: 0 00:28:40.665 Host Read Commands: 0 00:28:40.665 Host Write Commands: 0 00:28:40.665 Controller Busy Time: 0 minutes 00:28:40.665 Power Cycles: 0 00:28:40.665 Power On Hours: 0 hours 00:28:40.665 Unsafe Shutdowns: 0 00:28:40.665 Unrecoverable Media Errors: 0 00:28:40.665 Lifetime Error Log Entries: 0 00:28:40.665 Warning Temperature Time: 0 minutes 00:28:40.665 Critical Temperature Time: 0 minutes 00:28:40.665 00:28:40.665 Number of Queues 00:28:40.665 ================ 00:28:40.665 Number of I/O Submission Queues: 127 00:28:40.665 Number of I/O Completion Queues: 127 00:28:40.665 00:28:40.665 Active Namespaces 00:28:40.665 ================= 00:28:40.665 Namespace ID:1 00:28:40.665 Error Recovery Timeout: Unlimited 00:28:40.665 Command Set Identifier: NVM (00h) 00:28:40.665 Deallocate: Supported 00:28:40.665 Deallocated/Unwritten Error: Not Supported 00:28:40.665 Deallocated Read Value: Unknown 00:28:40.665 Deallocate in Write Zeroes: Not Supported 00:28:40.665 Deallocated Guard Field: 0xFFFF 00:28:40.665 Flush: Supported 00:28:40.665 Reservation: Supported 00:28:40.665 Namespace Sharing Capabilities: Multiple Controllers 00:28:40.665 Size (in LBAs): 131072 (0GiB) 00:28:40.665 Capacity (in LBAs): 131072 (0GiB) 00:28:40.666 Utilization (in LBAs): 131072 (0GiB) 00:28:40.666 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:40.666 EUI64: ABCDEF0123456789 00:28:40.666 UUID: 3c46b965-058c-45d0-9d88-33d0acc11e4c 00:28:40.666 Thin Provisioning: Not Supported 00:28:40.666 Per-NS Atomic Units: Yes 00:28:40.666 Atomic Boundary Size (Normal): 0 00:28:40.666 Atomic Boundary Size (PFail): 0 00:28:40.666 Atomic Boundary Offset: 0 00:28:40.666 Maximum Single Source Range Length: 65535 00:28:40.666 Maximum Copy Length: 65535 00:28:40.666 Maximum Source Range Count: 1 00:28:40.666 NGUID/EUI64 Never Reused: No 00:28:40.666 Namespace Write Protected: No 00:28:40.666 Number of LBA Formats: 1 00:28:40.666 Current LBA Format: LBA Format #00 00:28:40.666 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:40.666 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 
00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:40.666 rmmod nvme_tcp 00:28:40.666 rmmod nvme_fabrics 00:28:40.666 rmmod nvme_keyring 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1683394 ']' 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1683394 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1683394 ']' 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1683394 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1683394 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1683394' 00:28:40.666 killing process with pid 1683394 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1683394 00:28:40.666 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1683394 00:28:40.924 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.924 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:40.924 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:40.924 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.924 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:40.924 02:15:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.924 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.924 02:15:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.460 02:15:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:43.460 00:28:43.460 real 0m5.526s 00:28:43.460 user 0m4.839s 00:28:43.460 sys 0m1.913s 00:28:43.460 02:15:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:43.460 02:15:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:43.460 ************************************ 00:28:43.460 END TEST nvmf_identify 00:28:43.460 ************************************ 00:28:43.460 02:15:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:43.460 02:15:48 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:43.460 
02:15:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:43.460 02:15:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.460 02:15:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.460 ************************************ 00:28:43.460 START TEST nvmf_perf 00:28:43.460 ************************************ 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:43.460 * Looking for test storage... 00:28:43.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:43.460 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.461 02:15:48 
nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:43.461 02:15:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.835 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:44.836 
02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:44.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:44.836 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:44.836 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:44.836 Found net devices under 
0000:0a:00.1: cvl_0_1 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.836 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:45.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:28:45.094 00:28:45.094 --- 10.0.0.2 ping statistics --- 00:28:45.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.094 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:28:45.094 00:28:45.094 --- 10.0.0.1 ping statistics --- 00:28:45.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.094 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:45.094 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1685467 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1685467 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1685467 ']' 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:45.095 02:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:45.095 [2024-07-14 02:15:50.721025] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:45.095 [2024-07-14 02:15:50.721103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.095 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.353 [2024-07-14 02:15:50.788322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.353 [2024-07-14 02:15:50.879255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.353 [2024-07-14 02:15:50.879317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:45.353 [2024-07-14 02:15:50.879334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.353 [2024-07-14 02:15:50.879348] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.353 [2024-07-14 02:15:50.879359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.353 [2024-07-14 02:15:50.879454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.353 [2024-07-14 02:15:50.879502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.353 [2024-07-14 02:15:50.879594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.353 [2024-07-14 02:15:50.879596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.353 02:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:45.353 02:15:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:45.353 02:15:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:45.353 02:15:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:45.353 02:15:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 02:15:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.353 02:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:45.353 02:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:48.639 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:48.639 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:48.897 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:48.897 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:49.155 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:49.155 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:49.155 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:49.155 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:49.155 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:49.413 [2024-07-14 02:15:54.881064] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.413 02:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.671 02:15:55 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:49.671 02:15:55 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:49.929 02:15:55 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:49.929 02:15:55 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:50.187 02:15:55 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.445 [2024-07-14 02:15:55.888730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.445 02:15:55 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:50.703 02:15:56 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:50.703 02:15:56 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:50.703 02:15:56 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:50.703 02:15:56 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:52.079 Initializing NVMe Controllers 00:28:52.079 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:52.080 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:52.080 Initialization complete. Launching workers. 00:28:52.080 ======================================================== 00:28:52.080 Latency(us) 00:28:52.080 Device Information : IOPS MiB/s Average min max 00:28:52.080 PCIE (0000:88:00.0) NSID 1 from core 0: 85542.60 334.15 373.56 43.12 4324.71 00:28:52.080 ======================================================== 00:28:52.080 Total : 85542.60 334.15 373.56 43.12 4324.71 00:28:52.080 00:28:52.080 02:15:57 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.080 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.014 Initializing NVMe Controllers 00:28:53.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:53.014 Initialization complete. Launching workers. 
00:28:53.014 ======================================================== 00:28:53.014 Latency(us) 00:28:53.014 Device Information : IOPS MiB/s Average min max 00:28:53.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.89 0.28 14480.30 246.91 45670.35 00:28:53.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 62.90 0.25 16530.13 7912.65 51869.80 00:28:53.014 ======================================================== 00:28:53.014 Total : 134.79 0.53 15436.89 246.91 51869.80 00:28:53.014 00:28:53.014 02:15:58 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.014 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.391 Initializing NVMe Controllers 00:28:54.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:54.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:54.391 Initialization complete. Launching workers. 00:28:54.391 ======================================================== 00:28:54.391 Latency(us) 00:28:54.391 Device Information : IOPS MiB/s Average min max 00:28:54.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8035.48 31.39 3986.14 558.26 11024.24 00:28:54.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3881.29 15.16 8256.48 6251.78 15638.07 00:28:54.391 ======================================================== 00:28:54.391 Total : 11916.77 46.55 5376.99 558.26 15638.07 00:28:54.391 00:28:54.391 02:15:59 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:54.391 02:15:59 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:54.391 02:15:59 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.391 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.924 Initializing NVMe Controllers 00:28:56.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:56.924 Controller IO queue size 128, less than required. 00:28:56.924 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:56.924 Controller IO queue size 128, less than required. 00:28:56.924 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:56.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:56.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:56.924 Initialization complete. Launching workers. 
00:28:56.924 ======================================================== 00:28:56.924 Latency(us) 00:28:56.924 Device Information : IOPS MiB/s Average min max 00:28:56.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 889.94 222.49 147315.32 85197.86 192388.63 00:28:56.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.12 150.03 226602.92 78287.23 332640.58 00:28:56.924 ======================================================== 00:28:56.924 Total : 1490.07 372.52 179248.39 78287.23 332640.58 00:28:56.924 00:28:56.925 02:16:02 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:56.925 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.185 No valid NVMe controllers or AIO or URING devices found 00:28:57.185 Initializing NVMe Controllers 00:28:57.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.185 Controller IO queue size 128, less than required. 00:28:57.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:57.185 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:57.185 Controller IO queue size 128, less than required. 00:28:57.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:57.185 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:57.185 WARNING: Some requested NVMe devices were skipped 00:28:57.185 02:16:02 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:57.185 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.475 Initializing NVMe Controllers 00:29:00.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:00.475 Controller IO queue size 128, less than required. 00:29:00.475 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:00.475 Controller IO queue size 128, less than required. 00:29:00.475 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:00.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:00.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:00.475 Initialization complete. Launching workers. 
00:29:00.475 00:29:00.475 ==================== 00:29:00.475 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:00.475 TCP transport: 00:29:00.475 polls: 32586 00:29:00.475 idle_polls: 10365 00:29:00.475 sock_completions: 22221 00:29:00.475 nvme_completions: 3139 00:29:00.475 submitted_requests: 4664 00:29:00.475 queued_requests: 1 00:29:00.475 00:29:00.475 ==================== 00:29:00.475 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:00.475 TCP transport: 00:29:00.475 polls: 30942 00:29:00.475 idle_polls: 9076 00:29:00.475 sock_completions: 21866 00:29:00.475 nvme_completions: 3857 00:29:00.475 submitted_requests: 5814 00:29:00.475 queued_requests: 1 00:29:00.475 ======================================================== 00:29:00.475 Latency(us) 00:29:00.475 Device Information : IOPS MiB/s Average min max 00:29:00.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 784.49 196.12 169524.20 93651.50 224608.06 00:29:00.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 963.99 241.00 135256.73 53902.98 205590.42 00:29:00.476 ======================================================== 00:29:00.476 Total : 1748.49 437.12 150631.53 53902.98 224608.06 00:29:00.476 00:29:00.476 02:16:05 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:00.476 02:16:05 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:00.476 02:16:05 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:00.476 02:16:05 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:00.476 02:16:05 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:03.764 02:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d2e696b6-4d71-4ad3-851f-8aade4dbc039 00:29:03.764 02:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d2e696b6-4d71-4ad3-851f-8aade4dbc039 00:29:03.764 02:16:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d2e696b6-4d71-4ad3-851f-8aade4dbc039 00:29:03.764 02:16:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:03.764 02:16:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:03.764 02:16:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:03.764 02:16:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:03.764 02:16:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:03.764 { 00:29:03.764 "uuid": "d2e696b6-4d71-4ad3-851f-8aade4dbc039", 00:29:03.764 "name": "lvs_0", 00:29:03.764 "base_bdev": "Nvme0n1", 00:29:03.764 "total_data_clusters": 238234, 00:29:03.764 "free_clusters": 238234, 00:29:03.764 "block_size": 512, 00:29:03.764 "cluster_size": 4194304 00:29:03.764 } 00:29:03.764 ]' 00:29:03.764 02:16:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d2e696b6-4d71-4ad3-851f-8aade4dbc039") .free_clusters' 00:29:03.764 02:16:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:03.764 02:16:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d2e696b6-4d71-4ad3-851f-8aade4dbc039") .cluster_size' 00:29:03.764 02:16:09 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:03.764 02:16:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:03.764 02:16:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:29:03.764 952936 00:29:03.764 02:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:03.764 02:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:03.764 02:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d2e696b6-4d71-4ad3-851f-8aade4dbc039 lbd_0 20480 00:29:04.329 02:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=72dbcac9-c58c-46a4-890f-d184592224a6 00:29:04.329 02:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 72dbcac9-c58c-46a4-890f-d184592224a6 lvs_n_0 00:29:05.260 02:16:10 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=7c607f7c-944a-4217-9e50-04b7087bc57a 00:29:05.260 02:16:10 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 7c607f7c-944a-4217-9e50-04b7087bc57a 00:29:05.260 02:16:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=7c607f7c-944a-4217-9e50-04b7087bc57a 00:29:05.260 02:16:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:05.260 02:16:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:05.260 02:16:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:05.260 02:16:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:05.518 02:16:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:05.518 { 00:29:05.518 "uuid": "d2e696b6-4d71-4ad3-851f-8aade4dbc039", 00:29:05.518 "name": "lvs_0", 00:29:05.518 "base_bdev": "Nvme0n1", 00:29:05.518 "total_data_clusters": 238234, 00:29:05.518 "free_clusters": 233114, 00:29:05.518 "block_size": 512, 00:29:05.518 "cluster_size": 4194304 00:29:05.518 }, 00:29:05.518 { 00:29:05.518 "uuid": "7c607f7c-944a-4217-9e50-04b7087bc57a", 00:29:05.518 "name": "lvs_n_0", 00:29:05.518 "base_bdev": "72dbcac9-c58c-46a4-890f-d184592224a6", 00:29:05.518 "total_data_clusters": 5114, 00:29:05.518 "free_clusters": 5114, 00:29:05.518 "block_size": 512, 00:29:05.518 "cluster_size": 4194304 00:29:05.518 } 00:29:05.518 ]' 00:29:05.518 02:16:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="7c607f7c-944a-4217-9e50-04b7087bc57a") .free_clusters' 00:29:05.518 02:16:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:05.518 02:16:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="7c607f7c-944a-4217-9e50-04b7087bc57a") .cluster_size' 00:29:05.518 02:16:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:05.518 02:16:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:05.518 02:16:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:05.518 20456 00:29:05.518 02:16:11 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:05.518 02:16:11 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7c607f7c-944a-4217-9e50-04b7087bc57a lbd_nest_0 20456 00:29:05.776 02:16:11 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=e39e1967-a6d6-437a-b556-e63aeb97f44f 00:29:05.776 02:16:11 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:06.033 02:16:11 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:06.033 02:16:11 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e39e1967-a6d6-437a-b556-e63aeb97f44f 00:29:06.291 02:16:11 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.548 02:16:12 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:06.548 02:16:12 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:06.548 02:16:12 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:06.548 02:16:12 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:06.548 02:16:12 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.548 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.766 Initializing NVMe Controllers 00:29:18.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.766 Initialization complete. Launching workers. 00:29:18.766 ======================================================== 00:29:18.766 Latency(us) 00:29:18.766 Device Information : IOPS MiB/s Average min max 00:29:18.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.30 0.02 20707.01 235.58 46820.83 00:29:18.766 ======================================================== 00:29:18.766 Total : 48.30 0.02 20707.01 235.58 46820.83 00:29:18.766 00:29:18.766 02:16:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:18.766 02:16:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.766 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.736 Initializing NVMe Controllers 00:29:28.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:28.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:28.736 Initialization complete. Launching workers. 
00:29:28.736 ======================================================== 00:29:28.736 Latency(us) 00:29:28.736 Device Information : IOPS MiB/s Average min max 00:29:28.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.90 9.99 12515.78 5004.93 50865.81 00:29:28.736 ======================================================== 00:29:28.736 Total : 79.90 9.99 12515.78 5004.93 50865.81 00:29:28.736 00:29:28.736 02:16:32 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:28.736 02:16:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:28.736 02:16:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.736 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.693 Initializing NVMe Controllers 00:29:38.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:38.693 Initialization complete. Launching workers. 00:29:38.693 ======================================================== 00:29:38.693 Latency(us) 00:29:38.693 Device Information : IOPS MiB/s Average min max 00:29:38.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6949.90 3.39 4604.08 308.97 12115.52 00:29:38.693 ======================================================== 00:29:38.693 Total : 6949.90 3.39 4604.08 308.97 12115.52 00:29:38.693 00:29:38.694 02:16:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:38.694 02:16:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.694 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.655 Initializing NVMe Controllers 00:29:48.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:48.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:48.655 Initialization complete. Launching workers. 00:29:48.655 ======================================================== 00:29:48.655 Latency(us) 00:29:48.656 Device Information : IOPS MiB/s Average min max 00:29:48.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1772.13 221.52 18060.55 1242.09 41233.76 00:29:48.656 ======================================================== 00:29:48.656 Total : 1772.13 221.52 18060.55 1242.09 41233.76 00:29:48.656 00:29:48.656 02:16:53 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:48.656 02:16:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:48.656 02:16:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:48.656 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.667 Initializing NVMe Controllers 00:29:58.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.667 Controller IO queue size 128, less than required. 00:29:58.667 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:58.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:58.667 Initialization complete. Launching workers. 00:29:58.667 ======================================================== 00:29:58.667 Latency(us) 00:29:58.667 Device Information : IOPS MiB/s Average min max 00:29:58.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11907.15 5.81 10752.50 1670.51 25720.72 00:29:58.667 ======================================================== 00:29:58.667 Total : 11907.15 5.81 10752.50 1670.51 25720.72 00:29:58.667 00:29:58.667 02:17:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:58.667 02:17:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:58.667 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.639 Initializing NVMe Controllers 00:30:08.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.639 Controller IO queue size 128, less than required. 00:30:08.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:08.639 Initialization complete. Launching workers. 00:30:08.639 ======================================================== 00:30:08.639 Latency(us) 00:30:08.639 Device Information : IOPS MiB/s Average min max 00:30:08.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1203.50 150.44 107085.85 22888.60 215042.56 00:30:08.639 ======================================================== 00:30:08.639 Total : 1203.50 150.44 107085.85 22888.60 215042.56 00:30:08.639 00:30:08.639 02:17:14 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.897 02:17:14 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e39e1967-a6d6-437a-b556-e63aeb97f44f 00:30:09.832 02:17:15 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:09.832 02:17:15 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 72dbcac9-c58c-46a4-890f-d184592224a6 00:30:10.090 02:17:15 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:10.350 02:17:15 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:10.350 02:17:15 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:10.350 02:17:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:10.350 02:17:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:10.350 02:17:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:10.350 02:17:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:10.350 02:17:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:10.350 02:17:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:10.350 rmmod nvme_tcp 00:30:10.350 rmmod nvme_fabrics 00:30:10.350 rmmod nvme_keyring 00:30:10.350 02:17:16 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:10.350 02:17:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:10.350 02:17:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:10.350 02:17:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1685467 ']' 00:30:10.350 02:17:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1685467 00:30:10.350 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1685467 ']' 00:30:10.351 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1685467 00:30:10.351 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:30:10.351 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:10.351 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1685467 00:30:10.351 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:10.351 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:10.351 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1685467' 00:30:10.351 killing process with pid 1685467 00:30:10.351 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1685467 00:30:10.351 02:17:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1685467 00:30:12.255 02:17:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:12.255 02:17:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:12.255 02:17:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:12.255 02:17:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.255 02:17:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:12.255 02:17:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.255 02:17:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.255 02:17:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.162 02:17:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:14.162 00:30:14.162 real 1m31.125s 00:30:14.162 user 5m33.571s 00:30:14.162 sys 0m16.972s 00:30:14.162 02:17:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:14.162 02:17:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:14.162 ************************************ 00:30:14.162 END TEST nvmf_perf 00:30:14.162 ************************************ 00:30:14.162 02:17:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:14.162 02:17:19 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:14.162 02:17:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:14.162 02:17:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.162 02:17:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.162 ************************************ 00:30:14.162 START TEST nvmf_fio_host 00:30:14.162 ************************************ 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:14.162 * Looking for test 
storage... 00:30:14.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.162 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:14.163 02:17:19 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:16.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:16.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:16.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:16.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
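The trace above is nvmf/common.sh discovering usable NICs: it matches PCI IDs against a table of supported Intel (e810/x722) and Mellanox parts, maps each matching PCI function to its kernel net device through sysfs, and ends up with cvl_0_0 and cvl_0_1 plus is_hw=yes. A minimal stand-alone sketch of the same idea, assuming an Intel E810 port (device ID 0x159b) like the ones found here; this uses lspci rather than the script's own cached PCI table, and the variable names are purely illustrative:

    # enumerate E810 (8086:159b) functions and show the net device bound to each
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        netdev=$(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)
        echo "$pci -> ${netdev:-no net device bound}"
    done

On this host the two functions 0000:0a:00.0 and 0000:0a:00.1 resolve to the renamed interfaces cvl_0_0 and cvl_0_1, which the test then uses as the target and initiator ports for the namespace-based TCP loopback setup that follows.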
00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:16.064 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.065 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:16.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:30:16.325 00:30:16.325 --- 10.0.0.2 ping statistics --- 00:30:16.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.325 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:30:16.325 00:30:16.325 --- 10.0.0.1 ping statistics --- 00:30:16.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.325 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1697444 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1697444 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1697444 ']' 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:16.325 02:17:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.325 [2024-07-14 02:17:21.916637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:30:16.325 [2024-07-14 02:17:21.916712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.325 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.325 [2024-07-14 02:17:21.984281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.584 [2024-07-14 02:17:22.074255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:16.584 [2024-07-14 02:17:22.074318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.584 [2024-07-14 02:17:22.074347] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.584 [2024-07-14 02:17:22.074359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.584 [2024-07-14 02:17:22.074368] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.584 [2024-07-14 02:17:22.074504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.584 [2024-07-14 02:17:22.074570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.584 [2024-07-14 02:17:22.074619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.584 [2024-07-14 02:17:22.074621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.584 02:17:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:16.584 02:17:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:30:16.584 02:17:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:16.842 [2024-07-14 02:17:22.436205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.842 02:17:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:16.842 02:17:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:16.842 02:17:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.842 02:17:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:17.101 Malloc1 00:30:17.360 02:17:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.618 02:17:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:17.618 02:17:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.876 [2024-07-14 02:17:23.524105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.876 02:17:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:18.133 02:17:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.392 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:18.392 fio-3.35 00:30:18.392 Starting 1 thread 00:30:18.392 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.993 00:30:20.993 test: (groupid=0, jobs=1): err= 0: pid=1697807: Sun Jul 14 02:17:26 2024 00:30:20.993 read: IOPS=9037, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec) 00:30:20.993 slat (nsec): min=1899, max=143737, avg=2487.99, stdev=1715.13 00:30:20.993 clat (usec): min=3287, max=13502, avg=7809.90, stdev=573.30 00:30:20.993 lat (usec): min=3316, max=13505, avg=7812.39, stdev=573.20 00:30:20.993 clat percentiles (usec): 00:30:20.993 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:30:20.993 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:30:20.993 | 70.00th=[ 8094], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:30:20.993 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11731], 99.95th=[12518], 00:30:20.993 | 99.99th=[13435] 00:30:20.993 bw ( KiB/s): 
min=34816, max=36984, per=99.92%, avg=36120.00, stdev=925.68, samples=4 00:30:20.993 iops : min= 8704, max= 9246, avg=9030.00, stdev=231.42, samples=4 00:30:20.993 write: IOPS=9055, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec); 0 zone resets 00:30:20.993 slat (usec): min=2, max=143, avg= 2.66, stdev= 1.52 00:30:20.993 clat (usec): min=1396, max=12475, avg=6249.96, stdev=509.97 00:30:20.993 lat (usec): min=1405, max=12478, avg=6252.63, stdev=509.92 00:30:20.994 clat percentiles (usec): 00:30:20.994 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:30:20.994 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:30:20.994 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:30:20.994 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[10421], 99.95th=[11731], 00:30:20.994 | 99.99th=[12518] 00:30:20.994 bw ( KiB/s): min=35664, max=36608, per=99.98%, avg=36214.00, stdev=424.50, samples=4 00:30:20.994 iops : min= 8916, max= 9152, avg=9053.50, stdev=106.12, samples=4 00:30:20.994 lat (msec) : 2=0.01%, 4=0.08%, 10=99.78%, 20=0.13% 00:30:20.994 cpu : usr=53.22%, sys=38.85%, ctx=62, majf=0, minf=7 00:30:20.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:20.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:20.994 issued rwts: total=18129,18165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:20.994 00:30:20.994 Run status group 0 (all jobs): 00:30:20.994 READ: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.3MB), run=2006-2006msec 00:30:20.994 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.4MB), run=2006-2006msec 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:20.994 02:17:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:21.252 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:21.252 fio-3.35 00:30:21.252 Starting 1 thread 00:30:21.252 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.779 00:30:23.779 test: (groupid=0, jobs=1): err= 0: pid=1698168: Sun Jul 14 02:17:29 2024 00:30:23.779 read: IOPS=6893, BW=108MiB/s (113MB/s)(216MiB/2003msec) 00:30:23.779 slat (nsec): min=2831, max=96821, avg=3730.09, stdev=1596.91 00:30:23.779 clat (usec): min=3000, max=23803, avg=10866.87, stdev=3118.84 00:30:23.779 lat (usec): min=3004, max=23806, avg=10870.60, stdev=3118.87 00:30:23.779 clat percentiles (usec): 00:30:23.779 | 1.00th=[ 4621], 5.00th=[ 5932], 10.00th=[ 6849], 20.00th=[ 8225], 00:30:23.779 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[10814], 60.00th=[11469], 00:30:23.779 | 70.00th=[12256], 80.00th=[13304], 90.00th=[14877], 95.00th=[16319], 00:30:23.779 | 99.00th=[19006], 99.50th=[21103], 99.90th=[21627], 99.95th=[22676], 00:30:23.779 | 99.99th=[23725] 00:30:23.779 bw ( KiB/s): min=45728, max=61952, per=49.21%, avg=54280.00, stdev=7621.53, samples=4 00:30:23.779 iops : min= 2858, max= 3872, avg=3392.50, stdev=476.35, samples=4 00:30:23.779 write: IOPS=3933, BW=61.5MiB/s (64.4MB/s)(112MiB/1817msec); 0 zone resets 00:30:23.779 slat (usec): min=30, max=158, avg=32.98, stdev= 4.01 00:30:23.779 clat (usec): min=3031, max=29454, avg=14044.88, stdev=4292.41 00:30:23.779 lat (usec): min=3065, max=29486, avg=14077.86, stdev=4292.68 00:30:23.779 clat percentiles (usec): 00:30:23.779 | 1.00th=[ 7767], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9896], 00:30:23.779 | 30.00th=[10683], 40.00th=[11469], 50.00th=[13042], 60.00th=[15270], 00:30:23.779 | 70.00th=[17171], 80.00th=[18482], 90.00th=[20055], 95.00th=[21365], 00:30:23.779 | 99.00th=[23200], 99.50th=[23725], 99.90th=[28967], 99.95th=[29230], 00:30:23.779 | 99.99th=[29492] 00:30:23.779 bw ( KiB/s): min=48864, max=64544, per=90.11%, avg=56712.00, stdev=7997.75, samples=4 00:30:23.779 iops : min= 3054, max= 4034, avg=3544.50, stdev=499.86, samples=4 00:30:23.779 lat (msec) : 4=0.23%, 10=33.48%, 20=62.40%, 50=3.89% 00:30:23.779 cpu : usr=68.35%, sys=25.16%, ctx=39, 
majf=0, minf=3 00:30:23.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:30:23.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:23.779 issued rwts: total=13808,7147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:23.779 00:30:23.779 Run status group 0 (all jobs): 00:30:23.779 READ: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=216MiB (226MB), run=2003-2003msec 00:30:23.779 WRITE: bw=61.5MiB/s (64.4MB/s), 61.5MiB/s-61.5MiB/s (64.4MB/s-64.4MB/s), io=112MiB (117MB), run=1817-1817msec 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:23.779 02:17:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:27.056 Nvme0n1 00:30:27.056 02:17:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=4125b185-da5b-462d-82a4-ca444771110d 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 4125b185-da5b-462d-82a4-ca444771110d 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=4125b185-da5b-462d-82a4-ca444771110d 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:30.332 { 00:30:30.332 "uuid": "4125b185-da5b-462d-82a4-ca444771110d", 00:30:30.332 "name": "lvs_0", 00:30:30.332 "base_bdev": "Nvme0n1", 00:30:30.332 "total_data_clusters": 930, 00:30:30.332 "free_clusters": 930, 00:30:30.332 
"block_size": 512, 00:30:30.332 "cluster_size": 1073741824 00:30:30.332 } 00:30:30.332 ]' 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4125b185-da5b-462d-82a4-ca444771110d") .free_clusters' 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4125b185-da5b-462d-82a4-ca444771110d") .cluster_size' 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:30.332 952320 00:30:30.332 02:17:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:30.590 f079d515-fdb3-4ec9-949c-dc9d89f76e93 00:30:30.590 02:17:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:30.848 02:17:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:31.106 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:31.364 02:17:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.364 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:31.364 fio-3.35 00:30:31.364 Starting 1 thread 00:30:31.364 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.890 00:30:33.890 test: (groupid=0, jobs=1): err= 0: pid=1699538: Sun Jul 14 02:17:39 2024 00:30:33.890 read: IOPS=6002, BW=23.4MiB/s (24.6MB/s)(47.1MiB/2007msec) 00:30:33.890 slat (usec): min=2, max=147, avg= 2.74, stdev= 2.28 00:30:33.890 clat (usec): min=851, max=171351, avg=11787.14, stdev=11635.19 00:30:33.890 lat (usec): min=854, max=171387, avg=11789.89, stdev=11635.44 00:30:33.890 clat percentiles (msec): 00:30:33.890 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:33.890 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:33.890 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:33.890 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:33.890 | 99.99th=[ 171] 00:30:33.890 bw ( KiB/s): min=16880, max=26408, per=99.74%, avg=23950.00, stdev=4714.53, samples=4 00:30:33.890 iops : min= 4220, max= 6602, avg=5987.50, stdev=1178.63, samples=4 00:30:33.890 write: IOPS=5984, BW=23.4MiB/s (24.5MB/s)(46.9MiB/2007msec); 0 zone resets 00:30:33.890 slat (usec): min=2, max=104, avg= 2.84, stdev= 1.76 00:30:33.890 clat (usec): min=395, max=169651, avg=9426.32, stdev=10950.89 00:30:33.890 lat (usec): min=398, max=169656, avg=9429.16, stdev=10951.11 00:30:33.890 clat percentiles (msec): 00:30:33.890 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:33.890 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:33.890 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:33.890 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:30:33.890 | 99.99th=[ 169] 00:30:33.890 bw ( KiB/s): min=17896, max=25992, per=99.90%, avg=23914.00, stdev=4012.80, samples=4 00:30:33.890 iops : min= 4474, max= 6498, avg=5978.50, stdev=1003.20, samples=4 00:30:33.890 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:33.890 lat (msec) : 2=0.03%, 4=0.12%, 10=54.46%, 20=44.84%, 250=0.53% 00:30:33.890 cpu : usr=54.04%, sys=40.98%, ctx=102, majf=0, minf=25 00:30:33.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:33.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.890 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:33.890 issued rwts: total=12048,12011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:33.890 00:30:33.890 Run status group 0 (all jobs): 00:30:33.890 READ: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.3MB), run=2007-2007msec 00:30:33.890 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.2MB), run=2007-2007msec 00:30:33.890 02:17:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:34.148 02:17:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:35.518 02:17:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=494a7c24-610f-49ad-9a3b-c59b89bf8787 00:30:35.518 02:17:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 494a7c24-610f-49ad-9a3b-c59b89bf8787 00:30:35.518 02:17:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=494a7c24-610f-49ad-9a3b-c59b89bf8787 00:30:35.518 02:17:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:35.518 02:17:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:35.518 02:17:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:35.518 02:17:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:35.518 02:17:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:35.518 { 00:30:35.518 "uuid": "4125b185-da5b-462d-82a4-ca444771110d", 00:30:35.518 "name": "lvs_0", 00:30:35.518 "base_bdev": "Nvme0n1", 00:30:35.518 "total_data_clusters": 930, 00:30:35.518 "free_clusters": 0, 00:30:35.518 "block_size": 512, 00:30:35.518 "cluster_size": 1073741824 00:30:35.518 }, 00:30:35.518 { 00:30:35.518 "uuid": "494a7c24-610f-49ad-9a3b-c59b89bf8787", 00:30:35.518 "name": "lvs_n_0", 00:30:35.518 "base_bdev": "f079d515-fdb3-4ec9-949c-dc9d89f76e93", 00:30:35.518 "total_data_clusters": 237847, 00:30:35.518 "free_clusters": 237847, 00:30:35.518 "block_size": 512, 00:30:35.518 "cluster_size": 4194304 00:30:35.518 } 00:30:35.518 ]' 00:30:35.518 02:17:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="494a7c24-610f-49ad-9a3b-c59b89bf8787") .free_clusters' 00:30:35.518 02:17:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:35.518 02:17:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="494a7c24-610f-49ad-9a3b-c59b89bf8787") .cluster_size' 00:30:35.518 02:17:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:35.518 02:17:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:35.518 02:17:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:35.518 951388 00:30:35.518 02:17:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:36.082 949b19e6-c594-490e-8d59-7362ba65654a 00:30:36.339 02:17:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:36.339 02:17:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:36.597 02:17:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:36.855 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:36.856 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.856 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:36.856 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:36.856 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:36.856 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:36.856 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:36.856 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:36.856 02:17:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:37.112 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:37.112 fio-3.35 00:30:37.112 Starting 1 thread 00:30:37.112 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.671 00:30:39.671 test: (groupid=0, jobs=1): err= 0: pid=1700275: Sun Jul 14 02:17:45 2024 00:30:39.671 read: IOPS=5794, BW=22.6MiB/s (23.7MB/s)(45.5MiB/2008msec) 00:30:39.671 slat (usec): min=2, max=145, avg= 2.67, stdev= 1.96 00:30:39.671 clat (usec): min=4439, max=20738, avg=12255.19, stdev=1000.10 00:30:39.671 lat (usec): min=4444, max=20741, avg=12257.86, stdev=999.99 00:30:39.671 clat percentiles (usec): 00:30:39.671 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:30:39.671 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:30:39.671 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:30:39.671 | 99.00th=[14484], 99.50th=[14746], 99.90th=[19268], 99.95th=[19530], 00:30:39.671 | 99.99th=[20579] 00:30:39.671 bw ( KiB/s): min=21952, max=23696, per=99.74%, avg=23118.00, stdev=789.92, samples=4 00:30:39.671 iops : min= 5488, max= 5924, avg=5779.50, stdev=197.48, samples=4 00:30:39.671 write: IOPS=5775, BW=22.6MiB/s (23.7MB/s)(45.3MiB/2008msec); 0 zone resets 00:30:39.671 slat (usec): min=2, max=104, avg= 2.77, stdev= 1.43 00:30:39.671 clat (usec): min=2192, max=17641, avg=9723.13, stdev=878.49 00:30:39.671 lat (usec): min=2197, max=17644, avg=9725.90, stdev=878.45 00:30:39.671 clat percentiles (usec): 00:30:39.671 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:39.671 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:30:39.671 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:39.671 | 99.00th=[11731], 99.50th=[11994], 99.90th=[14877], 99.95th=[16188], 00:30:39.671 | 99.99th=[17433] 00:30:39.671 bw ( KiB/s): min=23000, max=23168, per=99.97%, avg=23094.00, stdev=86.99, samples=4 00:30:39.671 iops : min= 5750, max= 5792, avg=5773.50, stdev=21.75, samples=4 00:30:39.671 lat (msec) : 4=0.05%, 10=32.34%, 20=67.60%, 50=0.02% 00:30:39.671 cpu : usr=54.36%, sys=40.56%, ctx=119, majf=0, minf=25 00:30:39.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:39.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:39.671 issued rwts: total=11636,11597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:39.671 00:30:39.671 Run status group 0 (all jobs): 00:30:39.671 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.5MiB (47.7MB), run=2008-2008msec 00:30:39.671 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.3MiB (47.5MB), run=2008-2008msec 00:30:39.672 02:17:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:39.931 02:17:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:39.931 02:17:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:44.154 02:17:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 
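Teardown mirrors setup in reverse, and the ordering matters: the NVMe-oF subsystems are deleted before the lvol bdevs they expose, the nested lvstore before the base lvol it lives on, and the local NVMe controller is detached last. A condensed sketch of the sequence this part of the trace performs, using the same RPCs that appear in the log (the full rpc.py path is shortened here for readability):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3   # stop exposing the nested lvol
    rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0                # delete the nested lvol bdev
    rpc.py bdev_lvol_delete_lvstore -l lvs_n_0                # delete the nested lvstore
    rpc.py bdev_lvol_delete lvs_0/lbd_0                       # delete the base lvol bdev
    rpc.py bdev_lvol_delete_lvstore -l lvs_0                  # delete the base lvstore
    rpc.py bdev_nvme_detach_controller Nvme0                  # finally release the local NVMe device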
00:30:44.154 02:17:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:46.686 02:17:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:46.945 02:17:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:49.478 rmmod nvme_tcp 00:30:49.478 rmmod nvme_fabrics 00:30:49.478 rmmod nvme_keyring 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1697444 ']' 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1697444 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1697444 ']' 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1697444 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1697444 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1697444' 00:30:49.478 killing process with pid 1697444 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1697444 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1697444 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:49.478 02:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.480 02:17:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:51.480 00:30:51.480 real 0m37.141s 00:30:51.480 user 2m21.688s 00:30:51.480 sys 0m7.240s 00:30:51.480 02:17:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:51.480 02:17:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.480 ************************************ 00:30:51.480 END TEST nvmf_fio_host 00:30:51.480 ************************************ 00:30:51.480 02:17:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:51.480 02:17:56 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:51.480 02:17:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:51.480 02:17:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.480 02:17:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:51.480 ************************************ 00:30:51.480 START TEST nvmf_failover 00:30:51.480 ************************************ 00:30:51.480 02:17:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:51.480 * Looking for test storage... 00:30:51.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:51.480 02:17:56 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.480 02:17:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:51.480 02:17:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.480 02:17:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.480 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.480 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.480 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.480 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.480 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.480 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:51.481 02:17:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:53.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:53.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:53.402 
02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:53.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:53.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:53.402 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.403 02:17:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.403 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:30:53.403 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.403 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:53.403 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:53.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:30:53.662 00:30:53.662 --- 10.0.0.2 ping statistics --- 00:30:53.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.662 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:53.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:30:53.662 00:30:53.662 --- 10.0.0.1 ping statistics --- 00:30:53.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.662 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1703520 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1703520 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1703520 ']' 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:53.662 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:53.662 [2024-07-14 02:17:59.210158] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:30:53.662 [2024-07-14 02:17:59.210250] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.662 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.662 [2024-07-14 02:17:59.280311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:53.921 [2024-07-14 02:17:59.370377] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.921 [2024-07-14 02:17:59.370439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.921 [2024-07-14 02:17:59.370456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.921 [2024-07-14 02:17:59.370469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.921 [2024-07-14 02:17:59.370482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.921 [2024-07-14 02:17:59.370569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.921 [2024-07-14 02:17:59.370685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:53.921 [2024-07-14 02:17:59.370688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.921 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:53.921 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:53.921 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:53.921 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:53.921 02:17:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:53.921 02:17:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:53.921 02:17:59 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:54.179 [2024-07-14 02:17:59.749386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.179 02:17:59 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:54.438 Malloc0 00:30:54.438 02:18:00 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:54.697 02:18:00 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:54.955 02:18:00 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.214 [2024-07-14 02:18:00.838443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.214 02:18:00 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:55.472 [2024-07-14 02:18:01.103210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:55.472 02:18:01 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:55.730 [2024-07-14 02:18:01.352037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1703899 00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1703899 /var/tmp/bdevperf.sock 00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1703899 ']' 00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:55.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
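[editor's note] The failover scenario being exercised here boils down to publishing one subsystem on three TCP ports and then flipping the initiator between them while bdevperf drives I/O. A condensed, hedged summary of the RPCs traced above and in the entries that follow (rpc.py paths shortened; addresses, ports, and NQNs exactly as logged):

    # target side: one subsystem, one Malloc namespace, three TCP listeners
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done
    # initiator side: bdevperf (started with -z -r /var/tmp/bdevperf.sock) attaches NVMe0
    # over the first path; the later trace removes and re-adds listeners (4420 -> 4422 ->
    # 4420 again) to force path failover while the 15-second verify workload keeps running
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The loop is an editorial condensation of the three add_listener calls above; the repeated "recv state of tqpair ... is same with the state(5) to be set" messages further down are emitted as queue pairs on a removed listener are torn down during each failover step.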
00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:55.730 02:18:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:55.988 02:18:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:55.988 02:18:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:55.988 02:18:01 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:56.556 NVMe0n1 00:30:56.556 02:18:02 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:56.814 00:30:56.815 02:18:02 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1704054 00:30:56.815 02:18:02 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:56.815 02:18:02 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:57.749 02:18:03 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.007 [2024-07-14 02:18:03.634312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.007 [2024-07-14 02:18:03.634421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.007 [2024-07-14 02:18:03.634454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.007 [2024-07-14 02:18:03.634468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.007 [2024-07-14 02:18:03.634481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.007 [2024-07-14 02:18:03.634493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.007 [2024-07-14 02:18:03.634505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634579] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 [2024-07-14 02:18:03.634740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca970 is same with the state(5) to be set 00:30:58.008 02:18:03 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:01.292 02:18:06 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:01.551 00:31:01.551 02:18:07 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:01.810 [2024-07-14 02:18:07.271696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271844] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.271981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the 
state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 [2024-07-14 02:18:07.272220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcbed0 is same with the state(5) to be set 00:31:01.810 02:18:07 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:05.098 02:18:10 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.098 [2024-07-14 02:18:10.523959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.098 02:18:10 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:06.038 02:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:06.297 [2024-07-14 02:18:11.798681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 
00:31:06.297 [2024-07-14 02:18:11.798954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.798993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.297 [2024-07-14 02:18:11.799159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.298 [2024-07-14 02:18:11.799173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.298 [2024-07-14 02:18:11.799185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.298 [2024-07-14 02:18:11.799201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.298 [2024-07-14 02:18:11.799214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.298 [2024-07-14 02:18:11.799226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.298 [2024-07-14 02:18:11.799239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.298 [2024-07-14 02:18:11.799251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85fa0 is same with the state(5) to be set 00:31:06.298 02:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1704054 00:31:12.879 0 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1703899 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1703899 ']' 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1703899 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1703899 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1703899' 00:31:12.879 killing process with pid 1703899 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1703899 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1703899 00:31:12.879 02:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:12.879 [2024-07-14 02:18:01.415277] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:12.879 [2024-07-14 02:18:01.415365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1703899 ] 00:31:12.879 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.879 [2024-07-14 02:18:01.477762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.879 [2024-07-14 02:18:01.564970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.879 Running I/O for 15 seconds... 
00:31:12.879 - 00:31:12.883 [2024-07-14 02:18:03.636291 - 02:18:03.640956] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: further queued READ/WRITE commands on sqid:1 (lba 76144 through 77160, len:8) each completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; for the trailing WRITE commands nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs reported *ERROR*: aborting queued i/o and 558:nvme_qpair_manual_complete_request completed them manually
00:31:12.883 [2024-07-14 02:18:03.641015] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa09250 was disconnected and freed. reset controller.
00:31:12.883 [2024-07-14 02:18:03.641033] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:12.883 [2024-07-14 02:18:03.641068 - 02:18:03.641175] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3) completed as ABORTED - SQ DELETION (00/08)
00:31:12.883 [2024-07-14 02:18:03.641198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.883 [2024-07-14 02:18:03.644521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.883 [2024-07-14 02:18:03.644565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e2bd0 (9): Bad file descriptor
00:31:12.883 [2024-07-14 02:18:03.808365] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
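The "(00/08)" status printed for the aborted commands above decodes to status code type 0x00 (generic) and status code 0x08 (ABORTED - SQ DELETION): the I/O was still queued when its submission queue was torn down for the reset/failover, so it never reached the namespace. Below is a minimal sketch, not part of this test, of how a completion callback could recognize that status with the public SPDK NVMe API; the callback name and the decision to treat the I/O as retryable are illustrative assumptions.

    #include "spdk/nvme.h"

    /* Completion callback sketch: detect I/O completed with
     * ABORTED - SQ DELETION (sct 0x00 / sc 0x08), as in the log above. */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The queue pair was deleted (e.g. during path failover);
                     * the command did not execute and may be resubmitted
                     * once the controller reset completes. */
                    return;
            }
            /* Handle success or other error statuses here. */
    }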
00:31:12.883 [2024-07-14 02:18:07.273101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 
02:18:07.273474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.883 [2024-07-14 02:18:07.273636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.883 [2024-07-14 02:18:07.273651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.884 [2024-07-14 02:18:07.273666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.884 [2024-07-14 02:18:07.273696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.884 [2024-07-14 02:18:07.273725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.884 [2024-07-14 02:18:07.273752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.884 [2024-07-14 02:18:07.273779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.884 [2024-07-14 02:18:07.273807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.884 [2024-07-14 02:18:07.273834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.884 [2024-07-14 02:18:07.273886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.273938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.273968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.273983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.273998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274102] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.884 [2024-07-14 02:18:07.274966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.884 [2024-07-14 02:18:07.274981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.274994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 
02:18:07.275022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.885 [2024-07-14 02:18:07.275344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.885 [2024-07-14 02:18:07.275373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.275974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.275990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.276004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.276019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.276032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.276047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.276061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.276077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.276101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.276117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.276130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.276145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.276158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.276173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.276200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.276218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.276236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 
[2024-07-14 02:18:07.276250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.885 [2024-07-14 02:18:07.276263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.885 [2024-07-14 02:18:07.276292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.886 [2024-07-14 02:18:07.276305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.886 [2024-07-14 02:18:07.276333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.886 [2024-07-14 02:18:07.276363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108576 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108584 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108592 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108600 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108608 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108616 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108624 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108632 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108640 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108648 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108656 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.276958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.276969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.276980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108664 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.276993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108672 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108680 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108688 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108696 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 
02:18:07.277222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108704 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108712 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108720 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108728 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108736 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108744 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277518] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.886 [2024-07-14 02:18:07.277528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.886 [2024-07-14 02:18:07.277539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108752 len:8 PRP1 0x0 PRP2 0x0 00:31:12.886 [2024-07-14 02:18:07.277551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.886 [2024-07-14 02:18:07.277613] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbad7f0 was disconnected and freed. reset controller. 00:31:12.886 [2024-07-14 02:18:07.277631] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:12.886 [2024-07-14 02:18:07.277663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.886 [2024-07-14 02:18:07.277698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:07.277713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.887 [2024-07-14 02:18:07.277726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:07.277740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.887 [2024-07-14 02:18:07.277753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:07.277767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.887 [2024-07-14 02:18:07.277780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:07.277793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.887 [2024-07-14 02:18:07.281108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.887 [2024-07-14 02:18:07.281160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e2bd0 (9): Bad file descriptor 00:31:12.887 [2024-07-14 02:18:07.353470] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:12.887 [2024-07-14 02:18:11.796833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.887 [2024-07-14 02:18:11.796921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.796940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.887 [2024-07-14 02:18:11.796964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.796978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.887 [2024-07-14 02:18:11.796991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.797005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.887 [2024-07-14 02:18:11.797020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.797033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e2bd0 is same with the state(5) to be set 00:31:12.887 [2024-07-14 02:18:11.800917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.800947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.800976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.800992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.887 [2024-07-14 02:18:11.801421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.887 [2024-07-14 02:18:11.801930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.887 [2024-07-14 02:18:11.801944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.801959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.801972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.801987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 
02:18:11.802049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.888 [2024-07-14 02:18:11.802400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:123 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50016 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.802984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.802999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.803012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.803027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.803041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.803056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.888 [2024-07-14 02:18:11.803070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.888 [2024-07-14 02:18:11.803085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:12.889 [2024-07-14 02:18:11.803305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803593] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.889 [2024-07-14 02:18:11.803621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803918] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.803979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.803995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.889 [2024-07-14 02:18:11.804408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.889 [2024-07-14 02:18:11.804423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.890 [2024-07-14 02:18:11.804435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.890 [2024-07-14 02:18:11.804463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.890 [2024-07-14 02:18:11.804491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.890 [2024-07-14 02:18:11.804518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 
[2024-07-14 02:18:11.804532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.890 [2024-07-14 02:18:11.804551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.890 [2024-07-14 02:18:11.804579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.890 [2024-07-14 02:18:11.804606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.890 [2024-07-14 02:18:11.804653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50704 len:8 PRP1 0x0 PRP2 0x0 00:31:12.890 [2024-07-14 02:18:11.804665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.890 [2024-07-14 02:18:11.804696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.890 [2024-07-14 02:18:11.804707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50712 len:8 PRP1 0x0 PRP2 0x0 00:31:12.890 [2024-07-14 02:18:11.804719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.890 [2024-07-14 02:18:11.804751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.890 [2024-07-14 02:18:11.804764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50720 len:8 PRP1 0x0 PRP2 0x0 00:31:12.890 [2024-07-14 02:18:11.804776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.890 [2024-07-14 02:18:11.804800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.890 [2024-07-14 02:18:11.804811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50728 len:8 PRP1 0x0 PRP2 0x0 00:31:12.890 [2024-07-14 02:18:11.804823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.890 [2024-07-14 02:18:11.804846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.890 [2024-07-14 02:18:11.804857] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50736 len:8 PRP1 0x0 PRP2 0x0 00:31:12.890 [2024-07-14 02:18:11.804893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.890 [2024-07-14 02:18:11.804926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.890 [2024-07-14 02:18:11.804937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50744 len:8 PRP1 0x0 PRP2 0x0 00:31:12.890 [2024-07-14 02:18:11.804950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.804963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.890 [2024-07-14 02:18:11.804973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.890 [2024-07-14 02:18:11.804985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50752 len:8 PRP1 0x0 PRP2 0x0 00:31:12.890 [2024-07-14 02:18:11.804997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.805015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.890 [2024-07-14 02:18:11.805026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.890 [2024-07-14 02:18:11.805037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50760 len:8 PRP1 0x0 PRP2 0x0 00:31:12.890 [2024-07-14 02:18:11.805050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.890 [2024-07-14 02:18:11.805108] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbae2c0 was disconnected and freed. reset controller. 00:31:12.890 [2024-07-14 02:18:11.805127] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:12.890 [2024-07-14 02:18:11.805142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.890 [2024-07-14 02:18:11.808457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.890 [2024-07-14 02:18:11.808500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e2bd0 (9): Bad file descriptor 00:31:12.890 [2024-07-14 02:18:11.918599] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
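The *NOTICE* burst above is the expected side effect of the path switch: every I/O still queued on the old submission queue is completed with ABORTED - SQ DELETION status while bdev_nvme fails over from 10.0.0.2:4422 back to 10.0.0.2:4420 and resets the controller. A rough way to summarize such a burst from the test's capture file (hypothetical helper commands, not part of failover.sh; the grep strings and the try.txt name are taken from the trace):
grep -c 'ABORTED - SQ DELETION' try.txt              # queued I/O completed with abort status
grep 'Start failover from' try.txt                   # path transitions logged by bdev_nvme
grep -c 'Resetting controller successful' try.txt    # failover.sh itself expects 3 of these (checked just below)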
00:31:12.890 00:31:12.890 Latency(us) 00:31:12.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.890 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:12.890 Verification LBA range: start 0x0 length 0x4000 00:31:12.890 NVMe0n1 : 15.00 8591.31 33.56 880.63 0.00 13486.34 488.49 16699.54 00:31:12.890 =================================================================================================================== 00:31:12.890 Total : 8591.31 33.56 880.63 0.00 13486.34 488.49 16699.54 00:31:12.890 Received shutdown signal, test time was about 15.000000 seconds 00:31:12.890 00:31:12.890 Latency(us) 00:31:12.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.890 =================================================================================================================== 00:31:12.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1706396 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1706396 /var/tmp/bdevperf.sock 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1706396 ']' 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:12.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
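Condensed, the setup that the trace below performs before the failover checks is: add two extra listeners to the target subsystem, hand bdevperf (already waiting on its -z RPC socket) all three paths, then pull the primary path to force the first failover. A minimal sketch assembled from the commands in the trace, with rpc.py and bdevperf.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py and .../spdk/examples/bdev/bdevperf/bdevperf.py paths:
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# attach the same subsystem over all three ports so bdev_nvme has alternate paths
for port in 4420 4421 4422; do
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
# remove the active 4420 path; the try.txt dump below shows the resulting failover to 4421
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # the 1 second verify run whose output is cat'd below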
00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:12.890 02:18:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.890 02:18:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:12.890 02:18:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:12.890 02:18:18 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:12.890 [2024-07-14 02:18:18.313553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:12.890 02:18:18 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:12.890 [2024-07-14 02:18:18.554147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:13.151 02:18:18 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:13.410 NVMe0n1 00:31:13.410 02:18:18 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:13.668 00:31:13.668 02:18:19 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:13.926 00:31:13.926 02:18:19 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:13.926 02:18:19 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:14.183 02:18:19 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:14.440 02:18:20 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:17.723 02:18:23 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:17.723 02:18:23 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:17.723 02:18:23 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1707060 00:31:17.723 02:18:23 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:17.723 02:18:23 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1707060 00:31:19.110 0 00:31:19.110 02:18:24 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.110 [2024-07-14 02:18:17.844795] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:31:19.110 [2024-07-14 02:18:17.844927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706396 ] 00:31:19.110 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.110 [2024-07-14 02:18:17.906256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.110 [2024-07-14 02:18:17.988934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.110 [2024-07-14 02:18:20.056619] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:19.110 [2024-07-14 02:18:20.056756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.110 [2024-07-14 02:18:20.056781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.110 [2024-07-14 02:18:20.056802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.110 [2024-07-14 02:18:20.056816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.110 [2024-07-14 02:18:20.056831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.110 [2024-07-14 02:18:20.056861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.110 [2024-07-14 02:18:20.056884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.110 [2024-07-14 02:18:20.056899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.110 [2024-07-14 02:18:20.056914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:19.110 [2024-07-14 02:18:20.056982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:19.110 [2024-07-14 02:18:20.057020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1415bd0 (9): Bad file descriptor 00:31:19.110 [2024-07-14 02:18:20.067492] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:19.110 Running I/O for 1 seconds... 
00:31:19.110 00:31:19.110 Latency(us) 00:31:19.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.110 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:19.110 Verification LBA range: start 0x0 length 0x4000 00:31:19.110 NVMe0n1 : 1.00 8761.90 34.23 0.00 0.00 14549.56 1371.40 15534.46 00:31:19.110 =================================================================================================================== 00:31:19.110 Total : 8761.90 34.23 0.00 0.00 14549.56 1371.40 15534.46 00:31:19.110 02:18:24 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:19.110 02:18:24 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:19.110 02:18:24 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:19.429 02:18:24 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:19.429 02:18:24 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:19.687 02:18:25 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:19.946 02:18:25 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1706396 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1706396 ']' 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1706396 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1706396 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1706396' 00:31:23.237 killing process with pid 1706396 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1706396 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1706396 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:23.237 02:18:28 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:23.496 02:18:29 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:23.496 
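The path-removal checks in the trace just above reduce to the following sequence (same rpc.py shorthand as in the earlier sketch):
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0    # controller still reported after the run
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0    # still reported after losing 4422
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0    # final check before bdevperf is killed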
02:18:29 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:23.755 02:18:29 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:23.756 rmmod nvme_tcp 00:31:23.756 rmmod nvme_fabrics 00:31:23.756 rmmod nvme_keyring 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1703520 ']' 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1703520 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1703520 ']' 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1703520 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1703520 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1703520' 00:31:23.756 killing process with pid 1703520 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1703520 00:31:23.756 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1703520 00:31:24.015 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:24.015 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:24.015 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:24.015 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.015 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:24.015 02:18:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.015 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.015 02:18:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.916 02:18:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:25.916 00:31:25.916 real 0m34.633s 00:31:25.916 user 2m1.717s 00:31:25.916 sys 0m5.856s 00:31:25.916 02:18:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:25.916 02:18:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:31:25.916 ************************************ 00:31:25.916 END TEST nvmf_failover 00:31:25.916 ************************************ 00:31:25.916 02:18:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:25.916 02:18:31 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:25.916 02:18:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:25.916 02:18:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:25.916 02:18:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:26.174 ************************************ 00:31:26.174 START TEST nvmf_host_discovery 00:31:26.174 ************************************ 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:26.174 * Looking for test storage... 00:31:26.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:26.174 02:18:31 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:26.174 02:18:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.069 02:18:33 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:28.069 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:28.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:28.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:28.070 02:18:33 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:28.070 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:28.070 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.070 02:18:33 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:28.070 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:28.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:31:28.327 00:31:28.327 --- 10.0.0.2 ping statistics --- 00:31:28.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.327 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:28.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:31:28.327 00:31:28.327 --- 10.0.0.1 ping statistics --- 00:31:28.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.327 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1709669 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1709669 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1709669 ']' 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:28.327 02:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.327 [2024-07-14 02:18:33.857857] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:28.327 [2024-07-14 02:18:33.857978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.327 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.327 [2024-07-14 02:18:33.928408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.327 [2024-07-14 02:18:34.017864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.327 [2024-07-14 02:18:34.017927] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.327 [2024-07-14 02:18:34.017944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.327 [2024-07-14 02:18:34.017958] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.327 [2024-07-14 02:18:34.017971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
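Note: the nvmftestinit trace above reduces to a small amount of plain iproute2 setup. A condensed sketch, assembled only from the commands actually traced in this run (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this machine):

    # target port goes into its own namespace, initiator port stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is what the trace lines that follow show.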
00:31:28.327 [2024-07-14 02:18:34.018001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.585 [2024-07-14 02:18:34.162971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.585 [2024-07-14 02:18:34.171143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.585 null0 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.585 null1 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1709702 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1709702 /tmp/host.sock 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1709702 ']' 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:28.585 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:28.585 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:28.586 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:28.586 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.586 [2024-07-14 02:18:34.244735] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:28.586 [2024-07-14 02:18:34.244804] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709702 ] 00:31:28.586 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.844 [2024-07-14 02:18:34.307234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.844 [2024-07-14 02:18:34.397563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:28.844 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.101 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.101 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:29.101 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:29.101 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.101 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.102 [2024-07-14 02:18:34.788782] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.102 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:29.360 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:29.361 02:18:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:29.925 [2024-07-14 02:18:35.582022] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:29.925 [2024-07-14 02:18:35.582048] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:29.925 [2024-07-14 02:18:35.582075] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:30.183 [2024-07-14 02:18:35.668376] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:30.183 [2024-07-14 02:18:35.854786] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:30.183 [2024-07-14 02:18:35.854816] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:30.442 02:18:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
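Note: condensed, the discovery test up to this point issues the following RPCs, grouped here by which SPDK instance receives them (in the actual run bdev_nvme_start_discovery is issued before cnode0 exists, so the host learns about the subsystem through the discovery AER and log page rather than at connect time):

    # target instance (inside cvl_0_0_ns_spdk, default /var/tmp/spdk.sock)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # host instance (nvmf_tgt -m 0x1 -r /tmp/host.sock)
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

Each step is then verified by polling bdev_nvme_get_controllers and bdev_get_bdevs on /tmp/host.sock until the expected controller (nvme0) and namespace bdevs (nvme0n1, later nvme0n2) appear, which is the loop traced immediately above.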
00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:30.442 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.700 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:30.700 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:30.700 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:30.700 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.700 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:30.700 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.700 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.700 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.701 [2024-07-14 02:18:36.232962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:30.701 [2024-07-14 02:18:36.234140] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:30.701 [2024-07-14 02:18:36.234198] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.701 02:18:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:30.701 [2024-07-14 02:18:36.320457] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.701 02:18:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:30.701 02:18:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:30.701 [2024-07-14 02:18:36.385065] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:30.701 [2024-07-14 02:18:36.385087] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:30.701 [2024-07-14 02:18:36.385096] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.075 [2024-07-14 02:18:37.453520] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:32.075 [2024-07-14 02:18:37.453559] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:32.075 [2024-07-14 02:18:37.455556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.075 [2024-07-14 02:18:37.455591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.075 [2024-07-14 02:18:37.455609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.075 [2024-07-14 02:18:37.455624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.075 [2024-07-14 02:18:37.455639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.075 [2024-07-14 02:18:37.455654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.075 [2024-07-14 02:18:37.455670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.075 [2024-07-14 02:18:37.455685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.075 [2024-07-14 02:18:37.455699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3640 is same with the state(5) to be set 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:32.075 [2024-07-14 02:18:37.465556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3640 (9): Bad file descriptor 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.075 [2024-07-14 02:18:37.475604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.075 [2024-07-14 02:18:37.475890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.075 [2024-07-14 02:18:37.475944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3640 with addr=10.0.0.2, port=4420 00:31:32.075 [2024-07-14 02:18:37.475962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3640 is same with the state(5) to be set 00:31:32.075 [2024-07-14 02:18:37.475985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3640 (9): Bad file descriptor 00:31:32.075 [2024-07-14 02:18:37.476006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.075 [2024-07-14 02:18:37.476021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.075 [2024-07-14 02:18:37.476052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.075 [2024-07-14 02:18:37.476072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.075 [2024-07-14 02:18:37.485691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.075 [2024-07-14 02:18:37.485953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.075 [2024-07-14 02:18:37.485982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3640 with addr=10.0.0.2, port=4420 00:31:32.075 [2024-07-14 02:18:37.485999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3640 is same with the state(5) to be set 00:31:32.075 [2024-07-14 02:18:37.486020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3640 (9): Bad file descriptor 00:31:32.075 [2024-07-14 02:18:37.486041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.075 [2024-07-14 02:18:37.486054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.075 [2024-07-14 02:18:37.486068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:32.075 [2024-07-14 02:18:37.486087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.075 [2024-07-14 02:18:37.495771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.075 [2024-07-14 02:18:37.496020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.075 [2024-07-14 02:18:37.496050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3640 with addr=10.0.0.2, port=4420 00:31:32.075 [2024-07-14 02:18:37.496067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3640 is same with the state(5) to be set 00:31:32.075 [2024-07-14 02:18:37.496089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3640 (9): Bad file descriptor 00:31:32.075 [2024-07-14 02:18:37.496110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.075 [2024-07-14 02:18:37.496124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.075 [2024-07-14 02:18:37.496137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.075 [2024-07-14 02:18:37.496183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.075 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.075 [2024-07-14 02:18:37.505851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.075 [2024-07-14 02:18:37.506125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.075 [2024-07-14 02:18:37.506176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3640 with addr=10.0.0.2, port=4420 00:31:32.075 [2024-07-14 02:18:37.506194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1bf3640 is same with the state(5) to be set 00:31:32.075 [2024-07-14 02:18:37.506219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3640 (9): Bad file descriptor 00:31:32.075 [2024-07-14 02:18:37.506243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.075 [2024-07-14 02:18:37.506258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.075 [2024-07-14 02:18:37.506273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.075 [2024-07-14 02:18:37.506294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.075 [2024-07-14 02:18:37.515944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.075 [2024-07-14 02:18:37.516182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.075 [2024-07-14 02:18:37.516214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3640 with addr=10.0.0.2, port=4420 00:31:32.075 [2024-07-14 02:18:37.516239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3640 is same with the state(5) to be set 00:31:32.076 [2024-07-14 02:18:37.516264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3640 (9): Bad file descriptor 00:31:32.076 [2024-07-14 02:18:37.516302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.076 [2024-07-14 02:18:37.516323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.076 [2024-07-14 02:18:37.516339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.076 [2024-07-14 02:18:37.516360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.076 [2024-07-14 02:18:37.526018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.076 [2024-07-14 02:18:37.526261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.076 [2024-07-14 02:18:37.526292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3640 with addr=10.0.0.2, port=4420 00:31:32.076 [2024-07-14 02:18:37.526310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3640 is same with the state(5) to be set 00:31:32.076 [2024-07-14 02:18:37.526335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3640 (9): Bad file descriptor 00:31:32.076 [2024-07-14 02:18:37.526371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.076 [2024-07-14 02:18:37.526397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.076 [2024-07-14 02:18:37.526413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.076 [2024-07-14 02:18:37.526435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.076 [2024-07-14 02:18:37.536087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.076 [2024-07-14 02:18:37.536324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.076 [2024-07-14 02:18:37.536352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3640 with addr=10.0.0.2, port=4420 00:31:32.076 [2024-07-14 02:18:37.536368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3640 is same with the state(5) to be set 00:31:32.076 [2024-07-14 02:18:37.536390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3640 (9): Bad file descriptor 00:31:32.076 [2024-07-14 02:18:37.536423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.076 [2024-07-14 02:18:37.536441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.076 [2024-07-14 02:18:37.536455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.076 [2024-07-14 02:18:37.536506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.076 [2024-07-14 02:18:37.540430] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:32.076 [2024-07-14 02:18:37.540465] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.076 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.334 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:32.334 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:32.334 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:32.334 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.334 02:18:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:32.334 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.334 02:18:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.267 [2024-07-14 02:18:38.841133] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:33.267 [2024-07-14 02:18:38.841190] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:33.267 [2024-07-14 02:18:38.841215] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:33.267 [2024-07-14 02:18:38.927478] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:33.524 [2024-07-14 02:18:39.195515] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:33.524 [2024-07-14 02:18:39.195577] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:33.524 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.524 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:33.524 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:33.524 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:33.524 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:33.524 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:33.524 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:33.524 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:33.524 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:33.525 request: 00:31:33.525 { 00:31:33.525 "name": "nvme", 00:31:33.525 "trtype": "tcp", 00:31:33.525 "traddr": "10.0.0.2", 00:31:33.525 "adrfam": "ipv4", 00:31:33.525 "trsvcid": "8009", 00:31:33.525 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:33.525 "wait_for_attach": true, 00:31:33.525 "method": "bdev_nvme_start_discovery", 00:31:33.525 "req_id": 1 00:31:33.525 } 00:31:33.525 Got JSON-RPC error response 00:31:33.525 response: 00:31:33.525 { 00:31:33.525 "code": -17, 00:31:33.525 "message": "File exists" 00:31:33.525 } 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:33.525 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:33.782 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.782 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:33.782 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:33.782 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.782 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.783 request: 00:31:33.783 { 00:31:33.783 "name": "nvme_second", 00:31:33.783 "trtype": "tcp", 00:31:33.783 "traddr": "10.0.0.2", 00:31:33.783 "adrfam": "ipv4", 00:31:33.783 "trsvcid": "8009", 00:31:33.783 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:33.783 "wait_for_attach": true, 00:31:33.783 "method": "bdev_nvme_start_discovery", 00:31:33.783 "req_id": 1 00:31:33.783 } 00:31:33.783 Got JSON-RPC error response 00:31:33.783 response: 00:31:33.783 { 00:31:33.783 "code": -17, 00:31:33.783 "message": "File exists" 00:31:33.783 } 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.783 02:18:39 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.783 02:18:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.743 [2024-07-14 02:18:40.403115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.743 [2024-07-14 02:18:40.403183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c31b00 with addr=10.0.0.2, port=8010 00:31:34.743 [2024-07-14 02:18:40.403214] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:34.743 [2024-07-14 02:18:40.403229] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:34.743 [2024-07-14 02:18:40.403242] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:36.114 [2024-07-14 02:18:41.405428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.114 [2024-07-14 02:18:41.405462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c31b00 with addr=10.0.0.2, port=8010 00:31:36.114 [2024-07-14 02:18:41.405482] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:36.114 [2024-07-14 02:18:41.405494] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:36.114 [2024-07-14 02:18:41.405505] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:37.048 [2024-07-14 02:18:42.407668] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:37.048 request: 00:31:37.048 { 00:31:37.048 "name": "nvme_second", 00:31:37.048 "trtype": "tcp", 00:31:37.048 "traddr": "10.0.0.2", 00:31:37.048 "adrfam": "ipv4", 00:31:37.048 "trsvcid": "8010", 00:31:37.048 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:37.048 "wait_for_attach": false, 00:31:37.048 "attach_timeout_ms": 3000, 00:31:37.048 "method": "bdev_nvme_start_discovery", 00:31:37.048 "req_id": 1 00:31:37.048 } 00:31:37.048 Got JSON-RPC error response 00:31:37.048 response: 00:31:37.048 { 00:31:37.048 "code": -110, 
00:31:37.048 "message": "Connection timed out" 00:31:37.048 } 00:31:37.048 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:37.048 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:37.048 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:37.048 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:37.048 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:37.048 02:18:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:37.048 02:18:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:37.048 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.048 02:18:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1709702 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:37.049 rmmod nvme_tcp 00:31:37.049 rmmod nvme_fabrics 00:31:37.049 rmmod nvme_keyring 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1709669 ']' 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1709669 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1709669 ']' 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1709669 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1709669 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1709669' 00:31:37.049 killing process with pid 1709669 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1709669 00:31:37.049 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1709669 00:31:37.307 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:37.307 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:37.307 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:37.307 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:37.307 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:37.307 02:18:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.307 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:37.307 02:18:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.207 02:18:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:39.207 00:31:39.207 real 0m13.178s 00:31:39.207 user 0m19.078s 00:31:39.207 sys 0m2.771s 00:31:39.207 02:18:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:39.207 02:18:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.207 ************************************ 00:31:39.207 END TEST nvmf_host_discovery 00:31:39.207 ************************************ 00:31:39.207 02:18:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:39.207 02:18:44 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:39.207 02:18:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:39.207 02:18:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.207 02:18:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:39.207 ************************************ 00:31:39.208 START TEST nvmf_host_multipath_status 00:31:39.208 ************************************ 00:31:39.208 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:39.466 * Looking for test storage... 
00:31:39.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.466 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:39.467 02:18:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:39.467 02:18:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:41.369 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:41.369 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:41.369 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.369 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:41.370 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:41.370 02:18:46 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:41.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:41.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:31:41.370 00:31:41.370 --- 10.0.0.2 ping statistics --- 00:31:41.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.370 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:41.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:31:41.370 00:31:41.370 --- 10.0.0.1 ping statistics --- 00:31:41.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.370 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:41.370 02:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1712727 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1712727 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1712727 ']' 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:41.370 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:41.627 [2024-07-14 02:18:47.075041] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:31:41.628 [2024-07-14 02:18:47.075119] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.628 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.628 [2024-07-14 02:18:47.140168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:41.628 [2024-07-14 02:18:47.224297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:41.628 [2024-07-14 02:18:47.224348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:41.628 [2024-07-14 02:18:47.224377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:41.628 [2024-07-14 02:18:47.224389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:41.628 [2024-07-14 02:18:47.224403] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:41.628 [2024-07-14 02:18:47.224530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.628 [2024-07-14 02:18:47.224534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.884 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:41.884 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:41.884 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:41.884 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:41.884 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:41.884 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.884 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1712727 00:31:41.884 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:42.141 [2024-07-14 02:18:47.582338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.141 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:42.399 Malloc0 00:31:42.399 02:18:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:42.657 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:42.915 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:43.172 [2024-07-14 02:18:48.699107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.172 02:18:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:43.430 [2024-07-14 02:18:48.939760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1713010 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1713010 /var/tmp/bdevperf.sock 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1713010 ']' 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:43.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:43.430 02:18:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:43.688 02:18:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:43.688 02:18:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:43.688 02:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:43.946 02:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:44.511 Nvme0n1 00:31:44.511 02:18:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:45.077 Nvme0n1 00:31:45.077 02:18:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:45.077 02:18:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:46.975 02:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:46.975 02:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:47.234 02:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:47.492 02:18:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:48.867 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:48.867 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:48.867 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.867 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:48.867 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.867 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:48.867 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.867 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:49.126 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:49.126 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:49.126 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.126 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:49.384 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.384 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:49.384 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.384 02:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:49.643 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.643 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:49.643 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.643 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:49.902 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.902 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:49.902 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.902 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:50.161 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.161 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:50.161 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:50.419 02:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:50.678 02:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:51.609 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:51.609 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:51.609 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.609 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:51.866 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:51.866 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:51.866 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.866 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:52.158 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.158 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:52.158 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.158 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:52.416 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.416 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:52.416 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.416 02:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:52.674 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.674 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:52.674 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.674 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:52.932 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.932 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:52.932 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.932 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:53.191 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.191 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:53.191 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:53.449 02:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:53.707 02:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:54.642 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:54.642 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:54.642 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.642 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:54.900 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.900 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:54.900 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.900 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:55.158 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.158 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:55.158 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.158 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:55.417 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.417 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:55.417 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.417 02:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.675 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.675 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:55.676 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.676 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:55.934 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.934 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:55.934 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.934 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:56.192 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.192 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:56.192 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:56.450 02:19:01 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:56.709 02:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:57.644 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:57.644 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:57.644 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.644 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:57.901 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.901 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:57.901 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.901 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:58.159 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:58.159 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:58.159 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.159 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:58.417 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.417 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:58.417 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.417 02:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:58.674 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.674 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:58.674 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.674 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:58.932 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:31:58.932 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:58.932 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.932 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:59.190 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.190 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:59.190 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:59.447 02:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:59.704 02:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:00.636 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:00.636 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:00.636 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.636 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:00.894 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:00.894 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:00.894 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.894 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:01.152 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:01.152 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:01.152 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.152 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:01.410 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.410 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:32:01.410 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.410 02:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:01.668 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.668 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:01.668 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.668 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:01.925 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:01.925 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:01.926 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.926 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:02.183 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:02.183 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:02.183 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:02.440 02:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:02.698 02:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:03.632 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:03.632 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:03.632 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.632 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:03.890 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:03.890 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:03.890 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.890 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:04.148 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.148 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:04.148 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.148 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:04.406 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.406 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:04.406 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.406 02:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:04.664 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.664 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:04.664 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.664 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:04.922 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:04.922 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:04.923 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.923 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:05.181 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.181 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:05.439 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:05.439 02:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:05.697 02:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:05.955 02:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:06.921 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:06.921 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:06.921 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.921 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:07.179 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.179 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:07.179 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.179 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:07.436 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.436 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:07.436 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.436 02:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:07.694 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.694 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:07.694 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.694 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:07.951 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.951 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:07.951 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.951 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:08.207 02:19:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.207 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:08.207 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.207 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:08.465 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.465 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:08.465 02:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:08.722 02:19:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:08.979 02:19:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:09.910 02:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:09.910 02:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:09.910 02:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.910 02:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:10.167 02:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:10.167 02:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:10.167 02:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.167 02:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:10.424 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.424 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:10.424 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.424 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:10.681 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.681 02:19:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:10.681 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.681 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:10.939 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.939 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:10.939 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.939 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:11.197 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.197 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:11.197 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.197 02:19:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:11.455 02:19:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.455 02:19:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:11.455 02:19:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:11.713 02:19:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:11.971 02:19:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:12.904 02:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:12.904 02:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:12.904 02:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.904 02:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:13.162 02:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.162 02:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:13.162 02:19:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.162 02:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:13.419 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.419 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:13.419 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.419 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:13.677 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.677 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:13.677 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.677 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:13.935 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.935 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:13.935 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.935 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:14.193 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.193 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:14.193 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.193 02:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:14.450 02:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.450 02:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:14.450 02:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:14.708 02:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:14.966 02:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:15.900 02:19:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:15.900 02:19:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:15.900 02:19:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.900 02:19:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:16.158 02:19:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.158 02:19:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:16.158 02:19:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.158 02:19:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:16.416 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:16.416 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:16.416 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.416 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:16.674 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.674 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:16.674 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.674 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:16.932 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.932 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:16.932 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.932 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:17.190 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.190 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:17.190 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.190 02:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1713010 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1713010 ']' 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1713010 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1713010 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1713010' 00:32:17.448 killing process with pid 1713010 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1713010 00:32:17.448 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1713010 00:32:17.739 Connection closed with partial response: 00:32:17.739 00:32:17.739 00:32:17.739 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1713010 00:32:17.739 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:17.739 [2024-07-14 02:18:48.997473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:17.739 [2024-07-14 02:18:48.997556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713010 ] 00:32:17.739 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.739 [2024-07-14 02:18:49.057980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.739 [2024-07-14 02:18:49.146122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:17.739 Running I/O for 90 seconds... 
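The rounds above all follow one pattern: provision a subsystem with two TCP listeners, attach both paths from bdevperf (the second with -x multipath), then repeatedly flip the listeners' ANA states on the target and read back each path's current/connected/accessible flags through bdev_nvme_get_io_paths. A rough sketch of that flow, assuming the same RPC socket, NQN and addresses as in the trace; the RPC/BPF shell variables and the port_flag helper are illustrative shorthand for the script's own port_status function, not its literal code:

  RPC=./scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # target side: transport, backing bdev, subsystem with one namespace, two listeners
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421

  # initiator side: attach both listeners to one controller, second path enables multipath
  BPF=/var/tmp/bdevperf.sock
  $RPC -s $BPF bdev_nvme_set_options -r -1
  $RPC -s $BPF bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -l -1 -o 10
  $RPC -s $BPF bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x multipath -l -1 -o 10

  # flip ANA states on the target (optimized | non_optimized | inaccessible) ...
  $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n inaccessible

  # ... then check what the initiator sees for each port (illustrative helper)
  port_flag() {   # usage: port_flag 4421 accessible   -> prints true/false
      $RPC -s $BPF bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
  }
  [[ $(port_flag 4420 current)    == true  ]]
  [[ $(port_flag 4421 accessible) == false ]]

  # later rounds switch the bdev to active/active before repeating the same checks
  $RPC -s $BPF bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active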
00:32:17.739 [2024-07-14 02:19:04.952773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.739 [2024-07-14 02:19:04.952837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.952900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.739 [2024-07-14 02:19:04.952921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.952945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.739 [2024-07-14 02:19:04.952962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.952985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.739 [2024-07-14 02:19:04.953002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.739 [2024-07-14 02:19:04.953039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.739 [2024-07-14 02:19:04.953079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.739 [2024-07-14 02:19:04.953118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.953731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.953747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.954611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.954636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.954664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.954683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.954707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.954724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.954746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.739 [2024-07-14 02:19:04.954763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:17.739 [2024-07-14 02:19:04.954786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.954802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.954824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.954840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.954862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.740 [2024-07-14 02:19:04.954887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.954911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.954928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.954951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.954967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.954989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74520 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.955971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.955994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.740 [2024-07-14 02:19:04.956011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:32:17.740 [2024-07-14 02:19:04.956114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.956977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.956994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-14 02:19:04.957514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:17.740 [2024-07-14 02:19:04.957552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 
[2024-07-14 02:19:04.957853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.957965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.957982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74984 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958645] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.958965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.958987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.959025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 
02:19:04.959064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.959103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.959742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.959799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.959841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.959890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.959930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.959969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.959985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.960024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.960064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.741 [2024-07-14 02:19:04.960102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.741 [2024-07-14 02:19:04.960156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.741 [2024-07-14 02:19:04.960196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.741 [2024-07-14 02:19:04.960233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.741 [2024-07-14 02:19:04.960271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.741 [2024-07-14 02:19:04.960328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.741 [2024-07-14 02:19:04.960366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.960403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.960438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.960475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:17.741 [2024-07-14 02:19:04.960496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-14 02:19:04.960512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960848] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.960971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.960987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.742 [2024-07-14 02:19:04.961926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:17.742 [2024-07-14 02:19:04.961963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.961979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.743 [2024-07-14 02:19:04.962419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:32:17.743 [2024-07-14 02:19:04.962512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.962621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.962637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.963952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.963983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.964007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.964023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.964046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.964062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.964084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.964100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.964123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.964143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.964166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.743 [2024-07-14 02:19:04.975702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.975973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.975995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.976012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.976034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.976050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.976073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.976089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.976111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.976128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.976166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.976182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.976204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.976235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.976255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.976270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.976291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.743 [2024-07-14 02:19:04.976306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:17.743 [2024-07-14 02:19:04.976330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.976832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:32:17.744 [2024-07-14 02:19:04.976897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.976942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.977672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.977697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.977726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.977744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.977767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.977784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.977807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.977823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.977846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.977863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.977898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.977915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.977939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.977956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.977979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.744 [2024-07-14 02:19:04.978112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.744 [2024-07-14 02:19:04.978152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.744 [2024-07-14 02:19:04.978192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.744 [2024-07-14 02:19:04.978231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.744 [2024-07-14 02:19:04.978271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.744 [2024-07-14 02:19:04.978310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.744 [2024-07-14 02:19:04.978349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.744 [2024-07-14 02:19:04.978894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.978969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.978985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.979006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.979022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.979044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.979059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.979086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.979103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.979124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.979154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:17.744 [2024-07-14 02:19:04.979178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.744 [2024-07-14 02:19:04.979194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.979962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.979984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:32:17.745 [2024-07-14 02:19:04.980198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.745 [2024-07-14 02:19:04.980484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.980654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.980671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.981466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.981491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.981519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.981538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.981561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.981578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.981602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.981618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.981641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.981658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.981680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.981698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:17.745 [2024-07-14 02:19:04.981721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.745 [2024-07-14 02:19:04.981738] nvme_qpair.c: 
00:32:17.745-00:32:17.748 [2024-07-14 02:19:04.981760 - 02:19:04.992537] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs for WRITE and READ commands on qid:1 nsid:1 (lba 74200-75216, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:32:17.749 [2024-07-14 02:19:04.992954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.992970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.992993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.993970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.993992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.994008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.994045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.749 [2024-07-14 02:19:04.994062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:04.994085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:04.994101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.000991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.749 [2024-07-14 02:19:05.001022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.001048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.001066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.001953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.001979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.002972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.002994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:32:17.749 [2024-07-14 02:19:05.003154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.749 [2024-07-14 02:19:05.003505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:17.749 [2024-07-14 02:19:05.003528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.003979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.003996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.004019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.004036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.004058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.004075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.004099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.004115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.004141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.004158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.004180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.004197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.004219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.004235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.004257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.004274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.004296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.004313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.004974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.004998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.750 [2024-07-14 02:19:05.005045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.750 [2024-07-14 02:19:05.005493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.750 [2024-07-14 02:19:05.005530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.750 [2024-07-14 02:19:05.005566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.750 [2024-07-14 02:19:05.005603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.750 [2024-07-14 02:19:05.005639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.750 [2024-07-14 02:19:05.005675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.750 [2024-07-14 02:19:05.005711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.005967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.005989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:32:17.750 [2024-07-14 02:19:05.006235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.006966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.006988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.007004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.007025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.007041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.007063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.007079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:17.750 [2024-07-14 02:19:05.007100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.750 [2024-07-14 02:19:05.007116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.751 [2024-07-14 02:19:05.007429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.751 [2024-07-14 02:19:05.007726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.007785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74664 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.007801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.008620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.008644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.008671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.008689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.008712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.008729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.008752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.008769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.008791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.008807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.008829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.008846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.008876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.008895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.008918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.008935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.008958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.008974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:32:17.751 [2024-07-14 02:19:05.009446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.009968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.009991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.751 [2024-07-14 02:19:05.010674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.010956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.010972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.011631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.011654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.011681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.011700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.011723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75136 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.011740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.011767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.751 [2024-07-14 02:19:05.011784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:17.751 [2024-07-14 02:19:05.011806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.011823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.011846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.011863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.011895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.011912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.011935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.011951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.011974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.011990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012129] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.752 [2024-07-14 02:19:05.012199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.752 [2024-07-14 02:19:05.012237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.752 [2024-07-14 02:19:05.012278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.752 [2024-07-14 02:19:05.012329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.752 [2024-07-14 02:19:05.012365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.752 [2024-07-14 02:19:05.012400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.752 [2024-07-14 02:19:05.012435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 
02:19:05.012525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 
sqhd:006c p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.012967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.012988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.013972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.013987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.752 [2024-07-14 02:19:05.014440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 
nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.014955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.014973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.015000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.015017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.015046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.015068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.015097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.015114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.015142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.015159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.015187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.015220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.015248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.015264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.015291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.015308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.015349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.752 [2024-07-14 02:19:05.015365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:17.752 [2024-07-14 02:19:05.015407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:32:17.753 [2024-07-14 02:19:05.015742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.015979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.015996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.016968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.016984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.017010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.017025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.017051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.753 [2024-07-14 02:19:05.017067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.017093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.017108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.017134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.017164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.017190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.017206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.017231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.017246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.017271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.017293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.017321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.017337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:05.017478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:05.017497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.533501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.753 [2024-07-14 02:19:20.533575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.536946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.536976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.537525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.537548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.753 [2024-07-14 02:19:20.537580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.538981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:32:17.753 [2024-07-14 02:19:20.539247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.753 [2024-07-14 02:19:20.539438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:17.753 [2024-07-14 02:19:20.539475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.754 [2024-07-14 02:19:20.539491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:17.754 [2024-07-14 02:19:20.539513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.754 [2024-07-14 02:19:20.539529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:17.754 [2024-07-14 02:19:20.539550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.754 [2024-07-14 02:19:20.539566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:17.754 [2024-07-14 02:19:20.539587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.754 [2024-07-14 02:19:20.539603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:17.754 [2024-07-14 02:19:20.539625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.754 [2024-07-14 02:19:20.539641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:32:17.754 Received shutdown signal, test time was about 32.336083 seconds
00:32:17.754
00:32:17.754 Latency(us)
00:32:17.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:17.754 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:17.754 Verification LBA range: start 0x0 length 0x4000
00:32:17.754 Nvme0n1 : 32.34 7985.49 31.19 0.00 0.00 15997.67 235.14 4076242.11
00:32:17.754 ===================================================================================================================
00:32:17.754 Total : 7985.49 31.19 0.00 0.00 15997.67 235.14 4076242.11
00:32:17.754 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.011 rmmod nvme_tcp 00:32:18.011 rmmod nvme_fabrics 00:32:18.011 rmmod nvme_keyring 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1712727 ']' 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1712727 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1712727 ']' 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1712727 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1712727 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1712727' 00:32:18.011 killing process with pid 1712727 00:32:18.011 02:19:23 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1712727 00:32:18.011 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1712727 00:32:18.269 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:18.269 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:18.269 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:18.270 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:18.270 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:18.270 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.270 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.270 02:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.834 02:19:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:20.834 00:32:20.834 real 0m41.087s 00:32:20.834 user 2m3.930s 00:32:20.834 sys 0m10.483s 00:32:20.834 02:19:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:20.834 02:19:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:20.834 ************************************ 00:32:20.834 END TEST nvmf_host_multipath_status 00:32:20.834 ************************************ 00:32:20.834 02:19:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:20.834 02:19:25 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:20.834 02:19:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:20.834 02:19:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:20.834 02:19:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:20.834 ************************************ 00:32:20.834 START TEST nvmf_discovery_remove_ifc 00:32:20.834 ************************************ 00:32:20.834 02:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:20.834 * Looking for test storage... 
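The nvmf_host_multipath_status teardown traced above condenses to a short shell sequence. This is a minimal sketch reconstructed from this run's xtrace only: the subsystem NQN (nqn.2016-06.io.spdk:cnode1), target pid (1712727), interface name (cvl_0_1) and workspace paths are the values this particular run used, and the helpers seen in the trace (nvmftestfini, nvmfcleanup, killprocess, _remove_spdk_ns) are shown partially expanded, not as their full definitions.

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Remove the subsystem the test created, then the scratch output file.
  "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f "$SPDK/test/nvmf/host/try.txt"
  # nvmfcleanup: flush I/O and unload the kernel initiator modules; the
  # rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines in the log come from modprobe -v.
  sync
  set +e
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  set -e
  # killprocess: stop the nvmf_tgt reactor that served the test, then reap it.
  kill 1712727
  wait 1712727
  # Drop the target network namespace state (_remove_spdk_ns, whose body is not
  # shown in this trace) and flush the initiator-side address for the next test.
  ip -4 addr flush cvl_0_1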
00:32:20.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:20.834 02:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:22.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:22.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:22.747 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.748 02:19:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:22.748 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:22.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:22.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:32:22.748 00:32:22.748 --- 10.0.0.2 ping statistics --- 00:32:22.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.748 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:22.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:32:22.748 00:32:22.748 --- 10.0.0.1 ping statistics --- 00:32:22.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.748 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1719196 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1719196 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1719196 ']' 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:22.748 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.749 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:22.749 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.749 [2024-07-14 02:19:28.279632] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:32:22.749 [2024-07-14 02:19:28.279729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.749 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.749 [2024-07-14 02:19:28.343595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.749 [2024-07-14 02:19:28.431223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.749 [2024-07-14 02:19:28.431285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.749 [2024-07-14 02:19:28.431313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.749 [2024-07-14 02:19:28.431325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.749 [2024-07-14 02:19:28.431334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:22.749 [2024-07-14 02:19:28.431361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.007 [2024-07-14 02:19:28.574804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.007 [2024-07-14 02:19:28.582998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:23.007 null0 00:32:23.007 [2024-07-14 02:19:28.614961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1719230 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1719230 /tmp/host.sock 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1719230 ']' 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:23.007 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:23.007 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.007 [2024-07-14 02:19:28.683255] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:23.007 [2024-07-14 02:19:28.683333] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719230 ] 00:32:23.266 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.266 [2024-07-14 02:19:28.756369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.266 [2024-07-14 02:19:28.853470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.266 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.525 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.525 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:23.525 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.525 02:19:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.458 [2024-07-14 02:19:30.044138] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:24.458 [2024-07-14 02:19:30.044177] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:24.458 [2024-07-14 02:19:30.044211] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:24.458 [2024-07-14 02:19:30.130491] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:24.717 [2024-07-14 02:19:30.356920] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:24.717 [2024-07-14 02:19:30.356988] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:24.717 [2024-07-14 02:19:30.357028] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:24.717 [2024-07-14 02:19:30.357052] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:24.717 [2024-07-14 02:19:30.357092] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.717 [2024-07-14 02:19:30.361570] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa8a300 was disconnected and freed. delete nvme_qpair. 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:24.717 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.973 02:19:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:24.973 02:19:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:25.905 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.905 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.906 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.906 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.906 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.906 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.906 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.906 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.906 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:25.906 02:19:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:27.276 02:19:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:28.208 02:19:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:29.138 02:19:34 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:29.138 02:19:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.138 02:19:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.138 02:19:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.138 02:19:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.138 02:19:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.138 02:19:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.138 02:19:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.138 02:19:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:29.138 02:19:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:30.070 02:19:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:30.327 [2024-07-14 02:19:35.797830] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:30.327 [2024-07-14 02:19:35.797917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.327 [2024-07-14 02:19:35.797956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.327 [2024-07-14 02:19:35.797974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.327 [2024-07-14 02:19:35.797987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.327 [2024-07-14 02:19:35.798001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.327 [2024-07-14 02:19:35.798015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.327 [2024-07-14 02:19:35.798030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.327 [2024-07-14 02:19:35.798043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.327 [2024-07-14 02:19:35.798056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.327 [2024-07-14 02:19:35.798069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.327 [2024-07-14 02:19:35.798082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa50ce0 is same with the state(5) to be set 00:32:30.327 [2024-07-14 02:19:35.807850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa50ce0 (9): Bad file descriptor 00:32:30.327 [2024-07-14 02:19:35.817915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:31.258 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:31.258 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.259 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:31.259 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.259 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:31.259 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:31.259 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:31.259 [2024-07-14 02:19:36.861896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:31.259 [2024-07-14 02:19:36.861942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa50ce0 with addr=10.0.0.2, port=4420 00:32:31.259 [2024-07-14 02:19:36.861965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa50ce0 is same with the state(5) to be set 00:32:31.259 [2024-07-14 02:19:36.861996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa50ce0 (9): Bad file descriptor 00:32:31.259 [2024-07-14 02:19:36.862397] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:31.259 [2024-07-14 02:19:36.862433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:31.259 [2024-07-14 02:19:36.862452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:31.259 [2024-07-14 02:19:36.862479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:31.259 [2024-07-14 02:19:36.862505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
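The reconnect failures above belong to the host-side discovery session set up earlier in this trace. A condensed sketch of those host-side commands, assuming scripts/rpc.py as the RPC client (the trace itself goes through the test framework's rpc_cmd wrapper); sockets, addresses and flags are as recorded.

    # Host app on its own RPC socket, held at --wait-for-rpc so options can be set first.
    "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock "$@"; }
    rpc bdev_nvme_set_options -e 1
    rpc framework_start_init
    # Attach through the discovery service on 10.0.0.2:8009 and wait for the subsystem bdev.
    rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach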
00:32:31.259 [2024-07-14 02:19:36.862524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:31.259 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.259 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:31.259 02:19:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:32.191 [2024-07-14 02:19:37.865036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:32.191 [2024-07-14 02:19:37.865101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:32.191 [2024-07-14 02:19:37.865132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:32.191 [2024-07-14 02:19:37.865148] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:32.191 [2024-07-14 02:19:37.865199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:32.191 [2024-07-14 02:19:37.865254] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:32.191 [2024-07-14 02:19:37.865314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.191 [2024-07-14 02:19:37.865339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.191 [2024-07-14 02:19:37.865363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.191 [2024-07-14 02:19:37.865378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.191 [2024-07-14 02:19:37.865393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.191 [2024-07-14 02:19:37.865408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.191 [2024-07-14 02:19:37.865424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.191 [2024-07-14 02:19:37.865439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.191 [2024-07-14 02:19:37.865455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.191 [2024-07-14 02:19:37.865469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.191 [2024-07-14 02:19:37.865485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
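The repeated bdev_get_bdevs | jq | sort | xargs blocks in this trace are the test polling, once per second, for the expected bdev name. A minimal sketch of that loop, reusing the rpc wrapper from the sketch above.

    # get_bdev_list: current bdev names as a single sorted, space-separated string.
    get_bdev_list() {
        rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # wait_for_bdev: poll until the list matches the expected value ('' or nvme1n1 here).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }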
00:32:32.191 [2024-07-14 02:19:37.865650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa50160 (9): Bad file descriptor 00:32:32.191 [2024-07-14 02:19:37.866671] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:32.191 [2024-07-14 02:19:37.866697] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:32.191 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.191 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.191 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.191 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.191 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.191 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.449 02:19:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.449 02:19:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:32.449 02:19:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:33.382 02:19:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:34.316 [2024-07-14 02:19:39.919954] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:34.316 [2024-07-14 02:19:39.919996] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:34.316 [2024-07-14 02:19:39.920021] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:34.574 [2024-07-14 02:19:40.046452] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:34.574 02:19:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:34.574 [2024-07-14 02:19:40.230212] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:34.574 [2024-07-14 02:19:40.230283] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:34.574 [2024-07-14 02:19:40.230322] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:34.574 [2024-07-14 02:19:40.230349] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:34.574 [2024-07-14 02:19:40.230367] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:34.575 [2024-07-14 02:19:40.237886] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa3f920 was disconnected and freed. delete nvme_qpair. 
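The disconnect/reattach cycle above is driven by the test removing and later restoring the target-side interface inside the namespace. Condensed from the ip commands recorded in this trace, using wait_for_bdev from the sketch above.

    # Drop the target address and take the link down; discovery loses the subsystem and the bdev vanishes.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''
    # Restore the interface; discovery reattaches the subsystem as a new bdev (nvme1n1).
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1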
00:32:35.509 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1719230 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1719230 ']' 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1719230 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1719230 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1719230' 00:32:35.510 killing process with pid 1719230 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1719230 00:32:35.510 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1719230 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:35.768 rmmod nvme_tcp 00:32:35.768 rmmod nvme_fabrics 00:32:35.768 rmmod nvme_keyring 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
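The rmmod lines above come from nvmfcleanup unloading the kernel NVMe/TCP modules at the end of the test. A rough sketch of that step; the retry-until-success structure is an assumption, since the trace only records a single successful pass.

    # Sketch: flush I/O, then unload nvme-tcp and nvme-fabrics, retrying while still in use.
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e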
00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1719196 ']' 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1719196 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1719196 ']' 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1719196 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1719196 00:32:35.768 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:36.027 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:36.027 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1719196' 00:32:36.027 killing process with pid 1719196 00:32:36.027 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1719196 00:32:36.027 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1719196 00:32:36.027 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:36.027 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:36.027 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:36.027 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:36.027 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:36.028 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.028 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:36.028 02:19:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.590 02:19:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:38.590 00:32:38.590 real 0m17.734s 00:32:38.590 user 0m25.717s 00:32:38.590 sys 0m3.026s 00:32:38.590 02:19:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:38.590 02:19:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:38.590 ************************************ 00:32:38.590 END TEST nvmf_discovery_remove_ifc 00:32:38.590 ************************************ 00:32:38.590 02:19:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:38.590 02:19:43 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:38.590 02:19:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:38.590 02:19:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.590 02:19:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:38.590 ************************************ 00:32:38.590 START TEST nvmf_identify_kernel_target 00:32:38.590 ************************************ 
00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:38.590 * Looking for test storage... 00:32:38.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.590 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:38.591 02:19:43 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:38.591 02:19:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:40.500 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:40.500 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:40.500 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:40.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:40.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:40.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:40.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:32:40.501 00:32:40.501 --- 10.0.0.2 ping statistics --- 00:32:40.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.501 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:40.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:40.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:32:40.501 00:32:40.501 --- 10.0.0.1 ping statistics --- 00:32:40.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.501 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:40.501 02:19:45 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:40.501 02:19:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:41.437 Waiting for block devices as requested 00:32:41.437 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:41.437 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:41.695 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:41.695 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:41.695 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:41.695 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:41.954 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:41.954 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:41.954 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:41.954 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:42.212 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:42.212 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:42.212 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:42.471 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:42.471 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:42.471 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:42.471 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:42.730 No valid GPT data, bailing 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:42.730 00:32:42.730 Discovery Log Number of Records 2, Generation counter 2 00:32:42.730 =====Discovery Log Entry 0====== 00:32:42.730 trtype: tcp 00:32:42.730 adrfam: ipv4 00:32:42.730 subtype: current discovery subsystem 00:32:42.730 treq: not specified, sq flow control disable supported 00:32:42.730 portid: 1 00:32:42.730 trsvcid: 4420 00:32:42.730 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:42.730 traddr: 10.0.0.1 00:32:42.730 eflags: none 00:32:42.730 sectype: none 00:32:42.730 =====Discovery Log Entry 1====== 00:32:42.730 trtype: tcp 00:32:42.730 adrfam: ipv4 00:32:42.730 subtype: nvme subsystem 00:32:42.730 treq: not specified, sq flow control disable supported 00:32:42.730 portid: 1 00:32:42.730 trsvcid: 4420 00:32:42.730 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:42.730 traddr: 10.0.0.1 00:32:42.730 eflags: none 00:32:42.730 sectype: none 00:32:42.730 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:42.730 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:42.990 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.990 ===================================================== 00:32:42.990 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:42.990 ===================================================== 00:32:42.990 Controller Capabilities/Features 00:32:42.990 ================================ 00:32:42.990 Vendor ID: 0000 00:32:42.990 Subsystem Vendor ID: 0000 00:32:42.990 Serial Number: 1cf6a90c703256f2e6f3 00:32:42.990 Model Number: Linux 00:32:42.990 Firmware Version: 6.7.0-68 00:32:42.990 Recommended Arb Burst: 0 00:32:42.990 IEEE OUI Identifier: 00 00 00 00:32:42.990 Multi-path I/O 00:32:42.990 May have multiple subsystem ports: No 00:32:42.990 May have multiple 
controllers: No 00:32:42.990 Associated with SR-IOV VF: No 00:32:42.990 Max Data Transfer Size: Unlimited 00:32:42.990 Max Number of Namespaces: 0 00:32:42.990 Max Number of I/O Queues: 1024 00:32:42.990 NVMe Specification Version (VS): 1.3 00:32:42.990 NVMe Specification Version (Identify): 1.3 00:32:42.990 Maximum Queue Entries: 1024 00:32:42.990 Contiguous Queues Required: No 00:32:42.990 Arbitration Mechanisms Supported 00:32:42.990 Weighted Round Robin: Not Supported 00:32:42.990 Vendor Specific: Not Supported 00:32:42.990 Reset Timeout: 7500 ms 00:32:42.990 Doorbell Stride: 4 bytes 00:32:42.990 NVM Subsystem Reset: Not Supported 00:32:42.990 Command Sets Supported 00:32:42.990 NVM Command Set: Supported 00:32:42.990 Boot Partition: Not Supported 00:32:42.990 Memory Page Size Minimum: 4096 bytes 00:32:42.990 Memory Page Size Maximum: 4096 bytes 00:32:42.990 Persistent Memory Region: Not Supported 00:32:42.990 Optional Asynchronous Events Supported 00:32:42.991 Namespace Attribute Notices: Not Supported 00:32:42.991 Firmware Activation Notices: Not Supported 00:32:42.991 ANA Change Notices: Not Supported 00:32:42.991 PLE Aggregate Log Change Notices: Not Supported 00:32:42.991 LBA Status Info Alert Notices: Not Supported 00:32:42.991 EGE Aggregate Log Change Notices: Not Supported 00:32:42.991 Normal NVM Subsystem Shutdown event: Not Supported 00:32:42.991 Zone Descriptor Change Notices: Not Supported 00:32:42.991 Discovery Log Change Notices: Supported 00:32:42.991 Controller Attributes 00:32:42.991 128-bit Host Identifier: Not Supported 00:32:42.991 Non-Operational Permissive Mode: Not Supported 00:32:42.991 NVM Sets: Not Supported 00:32:42.991 Read Recovery Levels: Not Supported 00:32:42.991 Endurance Groups: Not Supported 00:32:42.991 Predictable Latency Mode: Not Supported 00:32:42.991 Traffic Based Keep ALive: Not Supported 00:32:42.991 Namespace Granularity: Not Supported 00:32:42.991 SQ Associations: Not Supported 00:32:42.991 UUID List: Not Supported 00:32:42.991 Multi-Domain Subsystem: Not Supported 00:32:42.991 Fixed Capacity Management: Not Supported 00:32:42.991 Variable Capacity Management: Not Supported 00:32:42.991 Delete Endurance Group: Not Supported 00:32:42.991 Delete NVM Set: Not Supported 00:32:42.991 Extended LBA Formats Supported: Not Supported 00:32:42.991 Flexible Data Placement Supported: Not Supported 00:32:42.991 00:32:42.991 Controller Memory Buffer Support 00:32:42.991 ================================ 00:32:42.991 Supported: No 00:32:42.991 00:32:42.991 Persistent Memory Region Support 00:32:42.991 ================================ 00:32:42.991 Supported: No 00:32:42.991 00:32:42.991 Admin Command Set Attributes 00:32:42.991 ============================ 00:32:42.991 Security Send/Receive: Not Supported 00:32:42.991 Format NVM: Not Supported 00:32:42.991 Firmware Activate/Download: Not Supported 00:32:42.991 Namespace Management: Not Supported 00:32:42.991 Device Self-Test: Not Supported 00:32:42.991 Directives: Not Supported 00:32:42.991 NVMe-MI: Not Supported 00:32:42.991 Virtualization Management: Not Supported 00:32:42.991 Doorbell Buffer Config: Not Supported 00:32:42.991 Get LBA Status Capability: Not Supported 00:32:42.991 Command & Feature Lockdown Capability: Not Supported 00:32:42.991 Abort Command Limit: 1 00:32:42.991 Async Event Request Limit: 1 00:32:42.991 Number of Firmware Slots: N/A 00:32:42.991 Firmware Slot 1 Read-Only: N/A 00:32:42.991 Firmware Activation Without Reset: N/A 00:32:42.991 Multiple Update Detection Support: N/A 
00:32:42.991 Firmware Update Granularity: No Information Provided 00:32:42.991 Per-Namespace SMART Log: No 00:32:42.991 Asymmetric Namespace Access Log Page: Not Supported 00:32:42.991 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:42.991 Command Effects Log Page: Not Supported 00:32:42.991 Get Log Page Extended Data: Supported 00:32:42.991 Telemetry Log Pages: Not Supported 00:32:42.991 Persistent Event Log Pages: Not Supported 00:32:42.991 Supported Log Pages Log Page: May Support 00:32:42.991 Commands Supported & Effects Log Page: Not Supported 00:32:42.991 Feature Identifiers & Effects Log Page:May Support 00:32:42.991 NVMe-MI Commands & Effects Log Page: May Support 00:32:42.991 Data Area 4 for Telemetry Log: Not Supported 00:32:42.991 Error Log Page Entries Supported: 1 00:32:42.991 Keep Alive: Not Supported 00:32:42.991 00:32:42.991 NVM Command Set Attributes 00:32:42.991 ========================== 00:32:42.991 Submission Queue Entry Size 00:32:42.991 Max: 1 00:32:42.991 Min: 1 00:32:42.991 Completion Queue Entry Size 00:32:42.991 Max: 1 00:32:42.991 Min: 1 00:32:42.991 Number of Namespaces: 0 00:32:42.991 Compare Command: Not Supported 00:32:42.991 Write Uncorrectable Command: Not Supported 00:32:42.991 Dataset Management Command: Not Supported 00:32:42.991 Write Zeroes Command: Not Supported 00:32:42.991 Set Features Save Field: Not Supported 00:32:42.991 Reservations: Not Supported 00:32:42.991 Timestamp: Not Supported 00:32:42.991 Copy: Not Supported 00:32:42.991 Volatile Write Cache: Not Present 00:32:42.991 Atomic Write Unit (Normal): 1 00:32:42.991 Atomic Write Unit (PFail): 1 00:32:42.991 Atomic Compare & Write Unit: 1 00:32:42.991 Fused Compare & Write: Not Supported 00:32:42.991 Scatter-Gather List 00:32:42.991 SGL Command Set: Supported 00:32:42.991 SGL Keyed: Not Supported 00:32:42.991 SGL Bit Bucket Descriptor: Not Supported 00:32:42.991 SGL Metadata Pointer: Not Supported 00:32:42.991 Oversized SGL: Not Supported 00:32:42.991 SGL Metadata Address: Not Supported 00:32:42.991 SGL Offset: Supported 00:32:42.991 Transport SGL Data Block: Not Supported 00:32:42.991 Replay Protected Memory Block: Not Supported 00:32:42.991 00:32:42.991 Firmware Slot Information 00:32:42.991 ========================= 00:32:42.991 Active slot: 0 00:32:42.991 00:32:42.991 00:32:42.991 Error Log 00:32:42.991 ========= 00:32:42.991 00:32:42.991 Active Namespaces 00:32:42.991 ================= 00:32:42.991 Discovery Log Page 00:32:42.991 ================== 00:32:42.991 Generation Counter: 2 00:32:42.991 Number of Records: 2 00:32:42.991 Record Format: 0 00:32:42.991 00:32:42.991 Discovery Log Entry 0 00:32:42.991 ---------------------- 00:32:42.991 Transport Type: 3 (TCP) 00:32:42.991 Address Family: 1 (IPv4) 00:32:42.991 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:42.991 Entry Flags: 00:32:42.991 Duplicate Returned Information: 0 00:32:42.991 Explicit Persistent Connection Support for Discovery: 0 00:32:42.991 Transport Requirements: 00:32:42.991 Secure Channel: Not Specified 00:32:42.991 Port ID: 1 (0x0001) 00:32:42.991 Controller ID: 65535 (0xffff) 00:32:42.991 Admin Max SQ Size: 32 00:32:42.991 Transport Service Identifier: 4420 00:32:42.991 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:42.991 Transport Address: 10.0.0.1 00:32:42.991 Discovery Log Entry 1 00:32:42.991 ---------------------- 00:32:42.991 Transport Type: 3 (TCP) 00:32:42.991 Address Family: 1 (IPv4) 00:32:42.991 Subsystem Type: 2 (NVM Subsystem) 00:32:42.991 Entry Flags: 
00:32:42.991 Duplicate Returned Information: 0 00:32:42.991 Explicit Persistent Connection Support for Discovery: 0 00:32:42.991 Transport Requirements: 00:32:42.991 Secure Channel: Not Specified 00:32:42.991 Port ID: 1 (0x0001) 00:32:42.991 Controller ID: 65535 (0xffff) 00:32:42.991 Admin Max SQ Size: 32 00:32:42.991 Transport Service Identifier: 4420 00:32:42.991 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:42.991 Transport Address: 10.0.0.1 00:32:42.991 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:42.991 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.991 get_feature(0x01) failed 00:32:42.991 get_feature(0x02) failed 00:32:42.991 get_feature(0x04) failed 00:32:42.991 ===================================================== 00:32:42.991 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:42.991 ===================================================== 00:32:42.991 Controller Capabilities/Features 00:32:42.991 ================================ 00:32:42.991 Vendor ID: 0000 00:32:42.991 Subsystem Vendor ID: 0000 00:32:42.991 Serial Number: 83780c92e0c818a1d950 00:32:42.991 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:42.991 Firmware Version: 6.7.0-68 00:32:42.991 Recommended Arb Burst: 6 00:32:42.991 IEEE OUI Identifier: 00 00 00 00:32:42.991 Multi-path I/O 00:32:42.991 May have multiple subsystem ports: Yes 00:32:42.991 May have multiple controllers: Yes 00:32:42.991 Associated with SR-IOV VF: No 00:32:42.991 Max Data Transfer Size: Unlimited 00:32:42.991 Max Number of Namespaces: 1024 00:32:42.991 Max Number of I/O Queues: 128 00:32:42.991 NVMe Specification Version (VS): 1.3 00:32:42.991 NVMe Specification Version (Identify): 1.3 00:32:42.991 Maximum Queue Entries: 1024 00:32:42.991 Contiguous Queues Required: No 00:32:42.991 Arbitration Mechanisms Supported 00:32:42.991 Weighted Round Robin: Not Supported 00:32:42.991 Vendor Specific: Not Supported 00:32:42.991 Reset Timeout: 7500 ms 00:32:42.991 Doorbell Stride: 4 bytes 00:32:42.991 NVM Subsystem Reset: Not Supported 00:32:42.991 Command Sets Supported 00:32:42.991 NVM Command Set: Supported 00:32:42.991 Boot Partition: Not Supported 00:32:42.991 Memory Page Size Minimum: 4096 bytes 00:32:42.991 Memory Page Size Maximum: 4096 bytes 00:32:42.991 Persistent Memory Region: Not Supported 00:32:42.991 Optional Asynchronous Events Supported 00:32:42.991 Namespace Attribute Notices: Supported 00:32:42.991 Firmware Activation Notices: Not Supported 00:32:42.991 ANA Change Notices: Supported 00:32:42.991 PLE Aggregate Log Change Notices: Not Supported 00:32:42.991 LBA Status Info Alert Notices: Not Supported 00:32:42.991 EGE Aggregate Log Change Notices: Not Supported 00:32:42.991 Normal NVM Subsystem Shutdown event: Not Supported 00:32:42.991 Zone Descriptor Change Notices: Not Supported 00:32:42.991 Discovery Log Change Notices: Not Supported 00:32:42.991 Controller Attributes 00:32:42.991 128-bit Host Identifier: Supported 00:32:42.992 Non-Operational Permissive Mode: Not Supported 00:32:42.992 NVM Sets: Not Supported 00:32:42.992 Read Recovery Levels: Not Supported 00:32:42.992 Endurance Groups: Not Supported 00:32:42.992 Predictable Latency Mode: Not Supported 00:32:42.992 Traffic Based Keep ALive: Supported 00:32:42.992 Namespace Granularity: Not Supported 
00:32:42.992 SQ Associations: Not Supported 00:32:42.992 UUID List: Not Supported 00:32:42.992 Multi-Domain Subsystem: Not Supported 00:32:42.992 Fixed Capacity Management: Not Supported 00:32:42.992 Variable Capacity Management: Not Supported 00:32:42.992 Delete Endurance Group: Not Supported 00:32:42.992 Delete NVM Set: Not Supported 00:32:42.992 Extended LBA Formats Supported: Not Supported 00:32:42.992 Flexible Data Placement Supported: Not Supported 00:32:42.992 00:32:42.992 Controller Memory Buffer Support 00:32:42.992 ================================ 00:32:42.992 Supported: No 00:32:42.992 00:32:42.992 Persistent Memory Region Support 00:32:42.992 ================================ 00:32:42.992 Supported: No 00:32:42.992 00:32:42.992 Admin Command Set Attributes 00:32:42.992 ============================ 00:32:42.992 Security Send/Receive: Not Supported 00:32:42.992 Format NVM: Not Supported 00:32:42.992 Firmware Activate/Download: Not Supported 00:32:42.992 Namespace Management: Not Supported 00:32:42.992 Device Self-Test: Not Supported 00:32:42.992 Directives: Not Supported 00:32:42.992 NVMe-MI: Not Supported 00:32:42.992 Virtualization Management: Not Supported 00:32:42.992 Doorbell Buffer Config: Not Supported 00:32:42.992 Get LBA Status Capability: Not Supported 00:32:42.992 Command & Feature Lockdown Capability: Not Supported 00:32:42.992 Abort Command Limit: 4 00:32:42.992 Async Event Request Limit: 4 00:32:42.992 Number of Firmware Slots: N/A 00:32:42.992 Firmware Slot 1 Read-Only: N/A 00:32:42.992 Firmware Activation Without Reset: N/A 00:32:42.992 Multiple Update Detection Support: N/A 00:32:42.992 Firmware Update Granularity: No Information Provided 00:32:42.992 Per-Namespace SMART Log: Yes 00:32:42.992 Asymmetric Namespace Access Log Page: Supported 00:32:42.992 ANA Transition Time : 10 sec 00:32:42.992 00:32:42.992 Asymmetric Namespace Access Capabilities 00:32:42.992 ANA Optimized State : Supported 00:32:42.992 ANA Non-Optimized State : Supported 00:32:42.992 ANA Inaccessible State : Supported 00:32:42.992 ANA Persistent Loss State : Supported 00:32:42.992 ANA Change State : Supported 00:32:42.992 ANAGRPID is not changed : No 00:32:42.992 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:42.992 00:32:42.992 ANA Group Identifier Maximum : 128 00:32:42.992 Number of ANA Group Identifiers : 128 00:32:42.992 Max Number of Allowed Namespaces : 1024 00:32:42.992 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:42.992 Command Effects Log Page: Supported 00:32:42.992 Get Log Page Extended Data: Supported 00:32:42.992 Telemetry Log Pages: Not Supported 00:32:42.992 Persistent Event Log Pages: Not Supported 00:32:42.992 Supported Log Pages Log Page: May Support 00:32:42.992 Commands Supported & Effects Log Page: Not Supported 00:32:42.992 Feature Identifiers & Effects Log Page:May Support 00:32:42.992 NVMe-MI Commands & Effects Log Page: May Support 00:32:42.992 Data Area 4 for Telemetry Log: Not Supported 00:32:42.992 Error Log Page Entries Supported: 128 00:32:42.992 Keep Alive: Supported 00:32:42.992 Keep Alive Granularity: 1000 ms 00:32:42.992 00:32:42.992 NVM Command Set Attributes 00:32:42.992 ========================== 00:32:42.992 Submission Queue Entry Size 00:32:42.992 Max: 64 00:32:42.992 Min: 64 00:32:42.992 Completion Queue Entry Size 00:32:42.992 Max: 16 00:32:42.992 Min: 16 00:32:42.992 Number of Namespaces: 1024 00:32:42.992 Compare Command: Not Supported 00:32:42.992 Write Uncorrectable Command: Not Supported 00:32:42.992 Dataset Management Command: Supported 
00:32:42.992 Write Zeroes Command: Supported 00:32:42.992 Set Features Save Field: Not Supported 00:32:42.992 Reservations: Not Supported 00:32:42.992 Timestamp: Not Supported 00:32:42.992 Copy: Not Supported 00:32:42.992 Volatile Write Cache: Present 00:32:42.992 Atomic Write Unit (Normal): 1 00:32:42.992 Atomic Write Unit (PFail): 1 00:32:42.992 Atomic Compare & Write Unit: 1 00:32:42.992 Fused Compare & Write: Not Supported 00:32:42.992 Scatter-Gather List 00:32:42.992 SGL Command Set: Supported 00:32:42.992 SGL Keyed: Not Supported 00:32:42.992 SGL Bit Bucket Descriptor: Not Supported 00:32:42.992 SGL Metadata Pointer: Not Supported 00:32:42.992 Oversized SGL: Not Supported 00:32:42.992 SGL Metadata Address: Not Supported 00:32:42.992 SGL Offset: Supported 00:32:42.992 Transport SGL Data Block: Not Supported 00:32:42.992 Replay Protected Memory Block: Not Supported 00:32:42.992 00:32:42.992 Firmware Slot Information 00:32:42.992 ========================= 00:32:42.992 Active slot: 0 00:32:42.992 00:32:42.992 Asymmetric Namespace Access 00:32:42.992 =========================== 00:32:42.992 Change Count : 0 00:32:42.992 Number of ANA Group Descriptors : 1 00:32:42.992 ANA Group Descriptor : 0 00:32:42.992 ANA Group ID : 1 00:32:42.992 Number of NSID Values : 1 00:32:42.992 Change Count : 0 00:32:42.992 ANA State : 1 00:32:42.992 Namespace Identifier : 1 00:32:42.992 00:32:42.992 Commands Supported and Effects 00:32:42.992 ============================== 00:32:42.992 Admin Commands 00:32:42.992 -------------- 00:32:42.992 Get Log Page (02h): Supported 00:32:42.992 Identify (06h): Supported 00:32:42.992 Abort (08h): Supported 00:32:42.992 Set Features (09h): Supported 00:32:42.992 Get Features (0Ah): Supported 00:32:42.992 Asynchronous Event Request (0Ch): Supported 00:32:42.992 Keep Alive (18h): Supported 00:32:42.992 I/O Commands 00:32:42.992 ------------ 00:32:42.992 Flush (00h): Supported 00:32:42.992 Write (01h): Supported LBA-Change 00:32:42.992 Read (02h): Supported 00:32:42.992 Write Zeroes (08h): Supported LBA-Change 00:32:42.992 Dataset Management (09h): Supported 00:32:42.992 00:32:42.992 Error Log 00:32:42.992 ========= 00:32:42.992 Entry: 0 00:32:42.992 Error Count: 0x3 00:32:42.992 Submission Queue Id: 0x0 00:32:42.992 Command Id: 0x5 00:32:42.992 Phase Bit: 0 00:32:42.992 Status Code: 0x2 00:32:42.992 Status Code Type: 0x0 00:32:42.992 Do Not Retry: 1 00:32:42.992 Error Location: 0x28 00:32:42.992 LBA: 0x0 00:32:42.992 Namespace: 0x0 00:32:42.992 Vendor Log Page: 0x0 00:32:42.992 ----------- 00:32:42.992 Entry: 1 00:32:42.992 Error Count: 0x2 00:32:42.992 Submission Queue Id: 0x0 00:32:42.992 Command Id: 0x5 00:32:42.992 Phase Bit: 0 00:32:42.992 Status Code: 0x2 00:32:42.992 Status Code Type: 0x0 00:32:42.992 Do Not Retry: 1 00:32:42.992 Error Location: 0x28 00:32:42.992 LBA: 0x0 00:32:42.992 Namespace: 0x0 00:32:42.992 Vendor Log Page: 0x0 00:32:42.992 ----------- 00:32:42.992 Entry: 2 00:32:42.992 Error Count: 0x1 00:32:42.992 Submission Queue Id: 0x0 00:32:42.992 Command Id: 0x4 00:32:42.992 Phase Bit: 0 00:32:42.992 Status Code: 0x2 00:32:42.992 Status Code Type: 0x0 00:32:42.992 Do Not Retry: 1 00:32:42.992 Error Location: 0x28 00:32:42.992 LBA: 0x0 00:32:42.992 Namespace: 0x0 00:32:42.992 Vendor Log Page: 0x0 00:32:42.992 00:32:42.992 Number of Queues 00:32:42.992 ================ 00:32:42.992 Number of I/O Submission Queues: 128 00:32:42.992 Number of I/O Completion Queues: 128 00:32:42.992 00:32:42.992 ZNS Specific Controller Data 00:32:42.992 
============================ 00:32:42.992 Zone Append Size Limit: 0 00:32:42.992 00:32:42.992 00:32:42.992 Active Namespaces 00:32:42.992 ================= 00:32:42.992 get_feature(0x05) failed 00:32:42.992 Namespace ID:1 00:32:42.992 Command Set Identifier: NVM (00h) 00:32:42.992 Deallocate: Supported 00:32:42.992 Deallocated/Unwritten Error: Not Supported 00:32:42.992 Deallocated Read Value: Unknown 00:32:42.992 Deallocate in Write Zeroes: Not Supported 00:32:42.992 Deallocated Guard Field: 0xFFFF 00:32:42.992 Flush: Supported 00:32:42.992 Reservation: Not Supported 00:32:42.992 Namespace Sharing Capabilities: Multiple Controllers 00:32:42.992 Size (in LBAs): 1953525168 (931GiB) 00:32:42.992 Capacity (in LBAs): 1953525168 (931GiB) 00:32:42.992 Utilization (in LBAs): 1953525168 (931GiB) 00:32:42.992 UUID: 0ed44a5e-b3ea-4d9c-b56d-48b72c5a45d3 00:32:42.992 Thin Provisioning: Not Supported 00:32:42.992 Per-NS Atomic Units: Yes 00:32:42.992 Atomic Boundary Size (Normal): 0 00:32:42.992 Atomic Boundary Size (PFail): 0 00:32:42.992 Atomic Boundary Offset: 0 00:32:42.992 NGUID/EUI64 Never Reused: No 00:32:42.992 ANA group ID: 1 00:32:42.992 Namespace Write Protected: No 00:32:42.992 Number of LBA Formats: 1 00:32:42.992 Current LBA Format: LBA Format #00 00:32:42.992 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:42.992 00:32:42.992 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:42.992 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:42.993 rmmod nvme_tcp 00:32:42.993 rmmod nvme_fabrics 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.993 02:19:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:45.523 
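For reference, the kernel target that both identify runs above talked to was assembled earlier in this test entirely through nvmet configfs. A condensed sketch of that sequence follows, using the NQN, backing device, and listener address from the trace; the echo redirect targets are not captured by xtrace, so the configfs attribute names shown are the standard nvmet ones and should be read as an assumption rather than a verbatim reproduction of common.sh:
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet                                          # loaded above, before setup.sh reset
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1            > "$subsys/attr_allow_any_host"       # assumed target of the bare 'echo 1' in the trace
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # backing block device picked above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"            # expose the subsystem on the listener port
The teardown traced below undoes this in reverse: unlink the port/subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.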
02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:45.523 02:19:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:46.087 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:46.345 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:46.345 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:46.345 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:46.345 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:46.345 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:46.345 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:46.345 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:46.345 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:46.345 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:46.345 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:46.345 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:46.345 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:46.345 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:46.345 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:46.345 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:47.279 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:47.538 00:32:47.538 real 0m9.204s 00:32:47.538 user 0m1.831s 00:32:47.538 sys 0m3.278s 00:32:47.538 02:19:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:47.538 02:19:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.538 ************************************ 00:32:47.538 END TEST nvmf_identify_kernel_target 00:32:47.538 ************************************ 00:32:47.538 02:19:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:47.538 02:19:53 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:47.538 02:19:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:47.538 02:19:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:47.538 02:19:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.538 ************************************ 00:32:47.538 START TEST nvmf_auth_host 00:32:47.538 ************************************ 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:47.538 * Looking for test storage... 00:32:47.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:47.538 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:47.539 02:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.438 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.438 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:49.438 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:49.438 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:49.438 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:49.438 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:49.438 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:49.438 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:49.438 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.439 
02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:49.439 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:49.439 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:49.439 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:49.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:49.439 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:49.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:49.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:32:49.697 00:32:49.697 --- 10.0.0.2 ping statistics --- 00:32:49.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.697 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:49.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:32:49.697 00:32:49.697 --- 10.0.0.1 ping statistics --- 00:32:49.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.697 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1726294 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1726294 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1726294 ']' 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
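At this point the auth test has rebuilt the same two-namespace TCP topology as the previous test and is starting the target application inside it. A minimal standalone sketch of that app-start step, using the binary path, flags, namespace name, and socket path from the trace (the readiness loop below is only illustrative; it is not SPDK's waitforlisten implementation):
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start nvmf_tgt inside the target namespace, flags as in the trace
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# block until the app's RPC socket shows up (simplified stand-in for waitforlisten)
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done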
00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:49.697 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3095914e848b58b8cdca7022dfcf1c96 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yBE 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3095914e848b58b8cdca7022dfcf1c96 0 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3095914e848b58b8cdca7022dfcf1c96 0 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3095914e848b58b8cdca7022dfcf1c96 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yBE 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yBE 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.yBE 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:49.955 
02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8955e719b81e400aeaa8f4cbba85b76d44bf61c2bea26f0a339198c2d7ae7471 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uv8 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8955e719b81e400aeaa8f4cbba85b76d44bf61c2bea26f0a339198c2d7ae7471 3 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8955e719b81e400aeaa8f4cbba85b76d44bf61c2bea26f0a339198c2d7ae7471 3 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8955e719b81e400aeaa8f4cbba85b76d44bf61c2bea26f0a339198c2d7ae7471 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uv8 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uv8 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uv8 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=75c133744ade2ac4f1e10937a3126f95c48cb6dfcfaa957d 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3Wo 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 75c133744ade2ac4f1e10937a3126f95c48cb6dfcfaa957d 0 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 75c133744ade2ac4f1e10937a3126f95c48cb6dfcfaa957d 0 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:49.955 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=75c133744ade2ac4f1e10937a3126f95c48cb6dfcfaa957d 00:32:49.956 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:49.956 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.213 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3Wo 00:32:50.213 02:19:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3Wo 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.3Wo 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=83ed95bf6d7ee462a61a96c859b8d91e0876aa62ee529145 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.n9j 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 83ed95bf6d7ee462a61a96c859b8d91e0876aa62ee529145 2 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 83ed95bf6d7ee462a61a96c859b8d91e0876aa62ee529145 2 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=83ed95bf6d7ee462a61a96c859b8d91e0876aa62ee529145 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.n9j 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.n9j 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.n9j 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f4530a62384ddb3d0cb0766faa4c7a13 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.u4L 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f4530a62384ddb3d0cb0766faa4c7a13 1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f4530a62384ddb3d0cb0766faa4c7a13 1 
00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f4530a62384ddb3d0cb0766faa4c7a13 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.u4L 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.u4L 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.u4L 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90bfce24e13f19bd173f9a9ca623aa13 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xkK 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90bfce24e13f19bd173f9a9ca623aa13 1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90bfce24e13f19bd173f9a9ca623aa13 1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90bfce24e13f19bd173f9a9ca623aa13 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xkK 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xkK 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.xkK 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=d9b870a053509a677b60366726ae9a8733bbb96961f3d2c1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.cP4 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d9b870a053509a677b60366726ae9a8733bbb96961f3d2c1 2 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d9b870a053509a677b60366726ae9a8733bbb96961f3d2c1 2 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d9b870a053509a677b60366726ae9a8733bbb96961f3d2c1 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.cP4 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.cP4 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.cP4 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:50.214 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f455b35af119c849741459c4aa53485 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.W0k 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f455b35af119c849741459c4aa53485 0 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f455b35af119c849741459c4aa53485 0 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f455b35af119c849741459c4aa53485 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.W0k 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.W0k 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.W0k 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9e2c3d52e757dc158862a5f22d9bc34a7ac577a47ed32f4eb3161686f8cc9ff5 00:32:50.215 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OF9 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9e2c3d52e757dc158862a5f22d9bc34a7ac577a47ed32f4eb3161686f8cc9ff5 3 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9e2c3d52e757dc158862a5f22d9bc34a7ac577a47ed32f4eb3161686f8cc9ff5 3 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9e2c3d52e757dc158862a5f22d9bc34a7ac577a47ed32f4eb3161686f8cc9ff5 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OF9 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OF9 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.OF9 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1726294 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1726294 ']' 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
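Annotation: the gen_dhchap_key / format_dhchap_key calls traced above draw len/2 random bytes from /dev/urandom as a hex string, wrap that string into a DH-HMAC-CHAP secret, and write it to a mktemp'd /tmp/spdk.key-* file with mode 0600; those paths become keys[0..4] and ckeys[0..3] (ckeys[4] stays empty). The python body is not shown in the trace; the sketch below is a standalone reconstruction consistent with the resulting strings (per the NVMe DH-HMAC-CHAP secret representation: base64 of the secret followed by its CRC-32, prefixed with a hash indicator of 00/01/02/03 for none/sha256/sha384/sha512). Function name and layout are illustrative, not the SPDK helpers themselves.

```bash
# Illustrative sketch of the DHHC-1 key files the trace produces.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                               # e.g. "null 32" or "sha512 64"
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # hex string, $len characters long
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
```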
00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:50.473 02:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.731 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yBE 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uv8 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uv8 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3Wo 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.n9j ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.n9j 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.u4L 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.xkK ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xkK 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.cP4 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.W0k ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.W0k 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.OF9 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
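Annotation: once the target process is up, the loop traced above registers every generated key file with the SPDK target's keyring over JSON-RPC, as key0..key4 plus the controller-side secrets ckey0..ckey3 where one exists. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; shown below with rpc.py directly as a sketch (the path and array names come from the run above).

```bash
# Roughly what the traced loop does; key/ckey file paths differ on every run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
    # Register the controller (bidirectional) secret only when one was generated.
    [[ -n ${ckeys[i]:-} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
```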
00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:50.732 02:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:51.664 Waiting for block devices as requested 00:32:51.922 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:51.922 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:52.180 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:52.180 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:52.180 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:52.438 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:52.438 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:52.438 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:52.438 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:52.695 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:52.695 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:52.695 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:52.952 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:52.952 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:52.952 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:52.952 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:53.209 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:53.466 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:53.466 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:53.466 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:53.466 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:53.466 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:53.466 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:53.466 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:53.466 02:19:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:53.466 02:19:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:53.729 No valid GPT data, bailing 00:32:53.729 02:19:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:53.730 00:32:53.730 Discovery Log Number of Records 2, Generation counter 2 00:32:53.730 =====Discovery Log Entry 0====== 00:32:53.730 trtype: tcp 00:32:53.730 adrfam: ipv4 00:32:53.730 subtype: current discovery subsystem 00:32:53.730 treq: not specified, sq flow control disable supported 00:32:53.730 portid: 1 00:32:53.730 trsvcid: 4420 00:32:53.730 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:53.730 traddr: 10.0.0.1 00:32:53.730 eflags: none 00:32:53.730 sectype: none 00:32:53.730 =====Discovery Log Entry 1====== 00:32:53.730 trtype: tcp 00:32:53.730 adrfam: ipv4 00:32:53.730 subtype: nvme subsystem 00:32:53.730 treq: not specified, sq flow control disable supported 00:32:53.730 portid: 1 00:32:53.730 trsvcid: 4420 00:32:53.730 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:53.730 traddr: 10.0.0.1 00:32:53.730 eflags: none 00:32:53.730 sectype: none 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 
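Annotation: the peer being authenticated against here is the Linux kernel nvmet target. configure_kernel_target and nvmet_auth_init (traced above) build it through configfs: a subsystem with one namespace backed by /dev/nvme0n1, a TCP port on 10.0.0.1:4420, allow_any_host switched off, and nqn.2024-02.io.spdk:host0 whitelisted. The nvme discover output then confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable on that port. A condensed sketch of the end state follows; the attribute names are the standard nvmet configfs ones inferred from the echo targets, not quoted verbatim from the trace.

```bash
# Condensed view of the configfs writes traced above (end state).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$port" "$host"
echo 0            > "$subsys/attr_allow_any_host"      # only whitelisted hosts may connect
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
ln -s "$host"   "$subsys/allowed_hosts/"
```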
]] 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.730 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.988 nvme0n1 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.988 
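Annotation: each authentication attempt has two halves. nvmet_auth_set_key writes the hash, DH group, host secret and (when present) controller secret into the whitelisted host's configfs entry; the SPDK side then pins its negotiable digests and DH groups with bdev_nvme_set_options and connects with bdev_nvme_attach_controller, naming the keyring entries via --dhchap-key / --dhchap-ctrlr-key. The very first connect, traced just above, advertises all three digests and all five FFDHE groups at once; later iterations pin a single combination. In the sketch below the kernel attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumed from the echoed values, and the key file names are the ones from this particular run.

```bash
# One iteration, kernel side plus SPDK side; RPC calls mirror the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

key=$(cat /tmp/spdk.key-null.3Wo)       # keys[1] from the generation step above
ckey=$(cat /tmp/spdk.key-sha384.n9j)    # ckeys[1]
echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"
echo ffdhe2048      > "$host_cfg/dhchap_dhgroup"
echo "$key"         > "$host_cfg/dhchap_key"
echo "$ckey"        > "$host_cfg/dhchap_ctrl_key"

"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
```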
02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:53.988 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.989 
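Annotation: after each attach, the test confirms that the DH-HMAC-CHAP exchange actually succeeded by listing the bdev_nvme controllers and checking that nvme0 exists, then detaches before moving to the next combination; a failed handshake would leave no controller behind. A sketch of that verify-and-teardown step (the jq filter is the one shown in the trace):

```bash
# Verify the authenticated connection, then tear it down.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]] || exit 1        # connect (and therefore auth) must have succeeded
"$rpc" bdev_nvme_detach_controller nvme0
```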
02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.989 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.246 nvme0n1 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.246 02:19:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.246 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.247 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.506 nvme0n1 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.506 02:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.506 nvme0n1 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.506 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:32:54.810 02:20:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.810 nvme0n1 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.810 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.068 nvme0n1 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.068 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
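Annotation: from here the log repeats the same set-key / set-options / attach / verify / detach cycle for every combination of digest, FFDHE group and key index; the iteration just completed was keyid 4, which exercises one-way authentication because ckeys[4] is empty and --dhchap-ctrlr-key is omitted, and the ffdhe3072 pass is starting. The overall shape of the sweep, reconstructed from the for loops visible in the trace (loop body elided, it is the cycle sketched above):

```bash
# Shape of the sweep the remainder of the log walks through.
for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```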
"ckey${keyid}"}) 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.069 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.327 nvme0n1 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.327 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.328 02:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.585 nvme0n1 00:32:55.585 
02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.585 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.586 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.843 nvme0n1 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.843 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.101 nvme0n1 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.101 
02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.101 02:20:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.101 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.359 nvme0n1 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:56.359 02:20:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.359 02:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.616 nvme0n1 00:32:56.616 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.616 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.616 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.617 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.874 02:20:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.874 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.132 nvme0n1 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.132 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.133 02:20:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.133 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.392 nvme0n1 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.392 02:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.392 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.650 nvme0n1 00:32:57.650 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.650 02:20:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.650 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.650 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.650 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.650 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.907 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.164 nvme0n1 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:32:58.164 02:20:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.164 02:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.728 nvme0n1 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.728 
02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.728 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.985 02:20:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.985 02:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.549 nvme0n1 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.549 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.115 nvme0n1 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.115 
02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.115 02:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.680 nvme0n1 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.680 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.246 nvme0n1 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.246 02:20:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.181 nvme0n1 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.181 02:20:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.181 02:20:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.555 nvme0n1 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.555 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.556 02:20:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.122 nvme0n1 00:33:04.122 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.122 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.122 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.122 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.122 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.122 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.381 
02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
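The get_main_ns_ip xtrace repeated above picks the address the host dials for the active transport. Below is a minimal bash sketch of the logic the trace implies; the real helper lives in nvmf/common.sh and may differ in detail, and TEST_TRANSPORT is assumed to hold "tcp" in this run:

  get_main_ns_ip() {
      # Map transport -> name of the variable holding the address (names taken from the trace).
      local -A ip_candidates=( ["rdma"]="NVMF_FIRST_TARGET_IP" ["tcp"]="NVMF_INITIATOR_IP" )
      [[ -z $TEST_TRANSPORT ]] && return 1            # mirrors the [[ -z tcp ]] guard above
      local var=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z $var ]] && return 1                       # mirrors [[ -z NVMF_INITIATOR_IP ]]
      local ip=${!var}                                # indirect expansion; 10.0.0.1 in this run
      [[ -z $ip ]] && return 1
      echo "$ip"
  }

The address it prints (10.0.0.1) is what the bdev_nvme_attach_controller calls in this log pass as -a.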
00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.381 02:20:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.316 nvme0n1 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.316 
02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.316 02:20:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.250 nvme0n1 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.250 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.508 02:20:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.508 nvme0n1 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
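The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion just above makes the bidirectional (controller) key optional: the array gains the extra --dhchap-ctrlr-key flag only when a controller key exists for that keyid, and stays empty otherwise (keyid 4, for example, carries no ckey in this run). A hedged sketch of one loop iteration as standalone RPC calls follows; rpc_cmd in the trace wraps SPDK's RPC client, assumed here to be scripts/rpc.py, and key1/ckey1 are key names registered earlier in auth.sh (not shown in this excerpt):

  # sha384 / ffdhe2048 / keyid 1 iteration, spelled out
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1   # drop the ctrlr-key flag when ckey is empty
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # should print nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0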
00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.508 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.767 nvme0n1 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.767 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.024 nvme0n1 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:07.024 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.025 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.282 nvme0n1 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.282 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.283 02:20:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.541 nvme0n1 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.541 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
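[editor's note] The get_main_ns_ip trace repeated above resolves the initiator address from the active transport before every attach. A minimal sketch of that lookup, assuming TEST_TRANSPORT and NVMF_INITIATOR_IP are exported by the surrounding nvmf test environment (variable names are taken from the trace; the exact nvmf/common.sh implementation may differ):

    # Map the transport name to the variable holding its address, then dereference it.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z "$TEST_TRANSPORT" ]] && return 1        # no transport selected
        ip=${ip_candidates[$TEST_TRANSPORT]}          # e.g. NVMF_INITIATOR_IP for tcp
        [[ -z "$ip" ]] && return 1                    # unknown transport
        [[ -z "${!ip}" ]] && return 1                 # address variable not populated
        echo "${!ip}"                                 # prints 10.0.0.1 in this run
    }
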
00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.542 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.799 nvme0n1 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
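[editor's note] Each iteration traced here follows the same host-side pattern: allow exactly one DH-CHAP digest and dhgroup, attach with the key under test, confirm the controller appears, then detach. A condensed, illustrative sketch of that cycle (a hypothetical helper, not the actual connect_authenticate from host/auth.sh), using the rpc_cmd wrapper, key names, NQNs, and address exactly as they appear in the trace; the target-side nvmet_auth_set_key writes are omitted because their configfs destinations are not visible in the xtrace output:

    # One attach/verify/teardown cycle, as exercised per digest/dhgroup/keyid.
    connect_cycle() {
        local digest=$1 dhgroup=$2 keyid=$3

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
        # Note: when ckey${keyid} is empty (keyid 4 here) the --dhchap-ctrlr-key
        # argument is dropped, matching the ${ckeys[keyid]:+...} expansion in the trace.

        # The attach only counts as successful if the controller is visible afterwards.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    # e.g. connect_cycle sha384 ffdhe3072 1
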
00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.799 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.800 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.800 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.800 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.800 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.800 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.800 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.800 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.800 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.800 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.057 nvme0n1 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.057 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.315 nvme0n1 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.315 02:20:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.573 nvme0n1 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.573 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.833 nvme0n1 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.833 02:20:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.833 02:20:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.834 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.113 nvme0n1 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.113 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.375 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.375 02:20:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.375 02:20:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.375 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.375 02:20:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.632 nvme0n1 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.632 02:20:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.632 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.890 nvme0n1 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:09.890 02:20:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.890 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.148 nvme0n1 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.149 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:10.407 02:20:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.665 nvme0n1 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.665 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.231 nvme0n1 00:33:11.231 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.231 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.231 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.231 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.231 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.231 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.231 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.231 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.231 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.232 02:20:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.796 nvme0n1 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.796 02:20:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.796 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.797 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.363 nvme0n1 00:33:12.363 02:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.363 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.363 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.363 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.363 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.363 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.363 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.363 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.363 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.363 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.622 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.187 nvme0n1 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.187 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
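Each pass in this section of the log repeats the same sequence: host/auth.sh programs one DH-HMAC-CHAP key into the kernel nvmet target (nvmet_auth_set_key <digest> <dhgroup> <keyid>), restricts the SPDK host to that single digest and DH group, and then attaches with the matching key pair before detaching and moving on. Below is a minimal sketch of the host-side steps, assuming SPDK's scripts/rpc.py is invoked directly rather than through the test harness's rpc_cmd wrapper, and that the named keys key0/ckey0 were registered earlier in the run (not shown in this excerpt); the values are taken from the sha384/ffdhe6144 pass above.

    # Allow only the digest/DH group under test for this pass
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # Attach to the target at 10.0.0.1:4420, authenticating with key0 and
    # verifying the controller's response with the controller key ckey0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Confirm the controller shows up, then detach before the next combination
    ./scripts/rpc.py bdev_nvme_get_controllers
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0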
00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.188 02:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.776 nvme0n1 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
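The host/auth.sh@100, @101 and @102 markers above expose the nesting of the test: it loops over every digest, every DH group and every key index, calling nvmet_auth_set_key and then connect_authenticate for each combination, with the secrets given in the DHHC-1:<id>:<base64>: form used for NVMe in-band authentication. A rough bash paraphrase of that outer loop follows, with the arrays abbreviated to the values visible in this excerpt (the script's full lists, and the keys array of DHHC-1 secrets, are defined earlier and not shown here).

    digests=(sha384 sha512)                    # only the digests visible in this excerpt
    dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)   # only the DH groups visible in this excerpt
    for digest in "${digests[@]}"; do          # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do    # host/auth.sh@101
            for keyid in "${!keys[@]}"; do     # host/auth.sh@102, key indexes 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach via SPDK RPCs and verify
            done
        done
    done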
00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.776 02:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.708 nvme0n1 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:14.708 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.709 02:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.641 nvme0n1 00:33:15.641 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.641 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.641 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.641 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.641 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.641 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.898 02:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.832 nvme0n1 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:16.832 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.833 02:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.765 nvme0n1 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.766 02:20:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.766 02:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.698 nvme0n1 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.698 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.957 nvme0n1 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.957 02:20:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.957 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.215 nvme0n1 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.215 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.216 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.474 nvme0n1 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.474 02:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.474 02:20:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.474 02:20:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.474 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.731 nvme0n1 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.731 nvme0n1 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.731 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:19.988 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.989 nvme0n1 00:33:19.989 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.248 
02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.248 02:20:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.248 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.507 nvme0n1 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.507 02:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.507 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.765 nvme0n1 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.765 02:20:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.765 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.023 nvme0n1 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.023 
02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.023 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.024 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.024 nvme0n1 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.281 02:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.539 nvme0n1 00:33:21.539 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.539 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.539 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.540 02:20:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.540 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.797 nvme0n1 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
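Every attach in this stretch is verified and torn down the same way before the next key slot is tried: list the controllers, check that nvme0 actually came up, then detach it. A sketch using only the RPCs visible in the trace (jq is assumed to be available, as it is in the trace itself):

  # confirm the authenticated controller exists
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]] || exit 1
  # detach so the next digest/dhgroup/key combination starts from a clean state
  scripts/rpc.py bdev_nvme_detach_controller nvme0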
00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.797 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.054 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.311 nvme0n1 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.311 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.312 02:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.570 nvme0n1 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.570 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.828 nvme0n1 00:33:22.828 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.828 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.828 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.828 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.828 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.828 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.086 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
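The nvmet_auth_set_key calls above provision the kernel soft target with the same secrets for each keyid before the host tries to connect. The helper's body is not part of this excerpt; the sketch below shows the configfs writes it presumably boils down to, which is an assumption about the target-side plumbing rather than a copy of the script (KEY/CKEY stand for the DHHC-1 strings echoed above for keyid 0):

#!/usr/bin/env bash
# Assumed kernel nvmet configfs layout (not shown in this log excerpt).
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
# KEY and CKEY hold the DHHC-1:... secrets echoed by the test for keyid 0.
echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # digest to negotiate
echo ffdhe6144      > "$host_dir/dhchap_dhgroup"   # DH group to negotiate
echo "$KEY"         > "$host_dir/dhchap_key"       # host secret
echo "$CKEY"        > "$host_dir/dhchap_ctrl_key"  # controller (bidirectional) secret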
00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.087 02:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.683 nvme0n1 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
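On the host side, each iteration above is just two RPCs: bdev_nvme_set_options narrows the allowed digest and DH group, and bdev_nvme_attach_controller presents the secrets; the connect succeeds only when they match what the target was given. A minimal standalone sketch of the keyid-0 case, assuming rpc.py from an SPDK checkout is on PATH and that the secrets were registered under the names key0/ckey0 earlier in the test (that setup is outside this excerpt):

#!/usr/bin/env bash
# Host side of one DH-HMAC-CHAP iteration (sha512 + ffdhe6144, keyid 0).
rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
# A successful handshake leaves a controller named nvme0 behind; the test
# then detaches it before moving on to the next keyid.
rpc.py bdev_nvme_get_controllers
rpc.py bdev_nvme_detach_controller nvme0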
00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.683 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.249 nvme0n1 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:24.249 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.250 02:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.815 nvme0n1 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.815 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.380 nvme0n1 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.380 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.381 02:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.947 nvme0n1 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.947 02:20:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA5NTkxNGU4NDhiNThiOGNkY2E3MDIyZGZjZjFjOTYbZtC/: 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: ]] 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk1NWU3MTliODFlNDAwYWVhYThmNGNiYmE4NWI3NmQ0NGJmNjFjMmJlYTI2ZjBhMzM5MTk4YzJkN2FlNzQ3Mbbasxk=: 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.947 02:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.881 nvme0n1 00:33:26.881 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.881 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.881 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.881 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.881 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.881 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.140 02:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.074 nvme0n1 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.074 02:20:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1MzBhNjIzODRkZGIzZDBjYjA3NjZmYWE0YzdhMTPlEhVl: 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: ]] 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZmNlMjRlMTNmMTliZDE3M2Y5YTljYTYyM2FhMTMyGWD+: 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.074 02:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.005 nvme0n1 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDliODcwYTA1MzUwOWE2NzdiNjAzNjY3MjZhZTlhODczM2JiYjk2OTYxZjNkMmMxedRXyQ==: 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: ]] 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0NTViMzVhZjExOWM4NDk3NDE0NTljNGFhNTM0ODWG+I1l: 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:29.005 02:20:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.005 02:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.939 nvme0n1 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWUyYzNkNTJlNzU3ZGMxNTg4NjJhNWYyMmQ5YmMzNGE3YWM1NzdhNDdlZDMyZjRlYjMxNjE2ODZmOGNjOWZmNcI4z8I=: 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.939 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:30.196 02:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.131 nvme0n1 00:33:31.131 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.131 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.131 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzVjMTMzNzQ0YWRlMmFjNGYxZTEwOTM3YTMxMjZmOTVjNDhjYjZkZmNmYWE5NTdkJmoVMA==: 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODNlZDk1YmY2ZDdlZTQ2MmE2MWE5NmM4NTliOGQ5MWUwODc2YWE2MmVlNTI5MTQ1wc2xiw==: 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.132 
02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.132 request: 00:33:31.132 { 00:33:31.132 "name": "nvme0", 00:33:31.132 "trtype": "tcp", 00:33:31.132 "traddr": "10.0.0.1", 00:33:31.132 "adrfam": "ipv4", 00:33:31.132 "trsvcid": "4420", 00:33:31.132 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:31.132 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:31.132 "prchk_reftag": false, 00:33:31.132 "prchk_guard": false, 00:33:31.132 "hdgst": false, 00:33:31.132 "ddgst": false, 00:33:31.132 "method": "bdev_nvme_attach_controller", 00:33:31.132 "req_id": 1 00:33:31.132 } 00:33:31.132 Got JSON-RPC error response 00:33:31.132 response: 00:33:31.132 { 00:33:31.132 "code": -5, 00:33:31.132 "message": "Input/output error" 00:33:31.132 } 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.132 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.391 request: 00:33:31.391 { 00:33:31.391 "name": "nvme0", 00:33:31.391 "trtype": "tcp", 00:33:31.391 "traddr": "10.0.0.1", 00:33:31.391 "adrfam": "ipv4", 00:33:31.391 "trsvcid": "4420", 00:33:31.391 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:31.391 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:31.391 "prchk_reftag": false, 00:33:31.391 "prchk_guard": false, 00:33:31.391 "hdgst": false, 00:33:31.391 "ddgst": false, 00:33:31.392 "dhchap_key": "key2", 00:33:31.392 "method": "bdev_nvme_attach_controller", 00:33:31.392 "req_id": 1 00:33:31.392 } 00:33:31.392 Got JSON-RPC error response 00:33:31.392 response: 00:33:31.392 { 00:33:31.392 "code": -5, 00:33:31.392 "message": "Input/output error" 00:33:31.392 } 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:31.392 02:20:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.392 request: 00:33:31.392 { 00:33:31.392 "name": "nvme0", 00:33:31.392 "trtype": "tcp", 00:33:31.392 "traddr": "10.0.0.1", 00:33:31.392 "adrfam": "ipv4", 
00:33:31.392 "trsvcid": "4420", 00:33:31.392 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:31.392 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:31.392 "prchk_reftag": false, 00:33:31.392 "prchk_guard": false, 00:33:31.392 "hdgst": false, 00:33:31.392 "ddgst": false, 00:33:31.392 "dhchap_key": "key1", 00:33:31.392 "dhchap_ctrlr_key": "ckey2", 00:33:31.392 "method": "bdev_nvme_attach_controller", 00:33:31.392 "req_id": 1 00:33:31.392 } 00:33:31.392 Got JSON-RPC error response 00:33:31.392 response: 00:33:31.392 { 00:33:31.392 "code": -5, 00:33:31.392 "message": "Input/output error" 00:33:31.392 } 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:31.392 02:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:31.392 rmmod nvme_tcp 00:33:31.392 rmmod nvme_fabrics 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1726294 ']' 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1726294 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1726294 ']' 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1726294 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1726294 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1726294' 00:33:31.392 killing process with pid 1726294 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1726294 00:33:31.392 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1726294 00:33:31.652 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:33:31.652 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:31.652 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:31.652 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:31.652 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:31.652 02:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.652 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.652 02:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:34.184 02:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:35.120 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:35.120 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:35.120 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:35.120 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:35.120 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:35.120 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:35.120 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:35.120 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:35.120 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:35.120 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:35.120 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:35.120 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:35.120 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:35.120 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:35.120 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:35.120 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:36.054 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:36.054 02:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.yBE /tmp/spdk.key-null.3Wo /tmp/spdk.key-sha256.u4L /tmp/spdk.key-sha384.cP4 /tmp/spdk.key-sha512.OF9 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:36.054 02:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:36.986 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:36.986 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:36.986 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:36.986 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:36.986 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:36.986 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:36.986 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:36.986 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:36.986 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:36.986 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:36.986 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:36.986 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:36.986 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:36.986 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:36.986 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:36.986 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:36.986 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:37.243 00:33:37.243 real 0m49.802s 00:33:37.243 user 0m47.553s 00:33:37.243 sys 0m5.711s 00:33:37.243 02:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:37.243 02:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.243 ************************************ 00:33:37.243 END TEST nvmf_auth_host 00:33:37.243 ************************************ 00:33:37.243 02:20:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:37.243 02:20:42 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:37.243 02:20:42 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:37.243 02:20:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:37.243 02:20:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:37.243 02:20:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:37.243 ************************************ 00:33:37.243 START TEST nvmf_digest 00:33:37.243 ************************************ 00:33:37.243 02:20:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:37.243 * Looking for test storage... 
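The three rejected attach attempts logged above are the nvmf_auth_host negative cases: connecting with no DH-HMAC-CHAP key, with the wrong key slot (key2), and with a mismatched key pair (key1/ckey2) all abort authentication against the kernel nvmet target that the test configured earlier, and rpc.py surfaces each failure as JSON-RPC error -5 (Input/output error); bdev_nvme_get_controllers then confirms no controller was left behind before cleanup tears the configfs entries down. A rough stand-alone equivalent of the last attempt (a sketch only; the test itself drives this through the rpc_cmd/NOT wrappers in autotest_common.sh, and key1/ckey2 refer to keys loaded earlier in the run):

    # Expected to fail: host key slot 1 paired with controller key ckey2 does not
    # match what the target was configured with, so authentication aborts and the
    # RPC returns code -5 (Input/output error).
    scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2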
00:33:37.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.500 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:37.501 02:20:42 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:37.501 02:20:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:39.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:39.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.398 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:39.399 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:39.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:39.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:33:39.399 00:33:39.399 --- 10.0.0.2 ping statistics --- 00:33:39.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.399 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:33:39.399 00:33:39.399 --- 10.0.0.1 ping statistics --- 00:33:39.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.399 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:39.399 02:20:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:39.399 ************************************ 00:33:39.399 START TEST nvmf_digest_clean 00:33:39.399 ************************************ 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1735724 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1735724 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1735724 ']' 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.399 
02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:39.399 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:39.399 [2024-07-14 02:20:45.063896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:39.399 [2024-07-14 02:20:45.063985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.657 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.657 [2024-07-14 02:20:45.129766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.657 [2024-07-14 02:20:45.217555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.657 [2024-07-14 02:20:45.217611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.657 [2024-07-14 02:20:45.217624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.657 [2024-07-14 02:20:45.217635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.657 [2024-07-14 02:20:45.217644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
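The nvmf_digest suite starts its target with nvmfappstart --wait-for-rpc: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace and the test blocks in waitforlisten until /var/tmp/spdk.sock answers, after which the null bdev, TCP transport and 10.0.0.2:4420 listener seen below are configured. A minimal sketch of that startup, reusing the paths from the log (the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual implementation):

    # Start the target in the dedicated network namespace; --wait-for-rpc defers
    # subsystem initialization until an explicit framework_start_init RPC.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Illustrative stand-in for waitforlisten: poll until the RPC socket is up.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done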
00:33:39.657 [2024-07-14 02:20:45.217670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:39.657 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.658 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:39.915 null0 00:33:39.915 [2024-07-14 02:20:45.409203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.915 [2024-07-14 02:20:45.433415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.915 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.915 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:39.915 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:39.915 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:39.915 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:39.915 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1735792 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1735792 /var/tmp/bperf.sock 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1735792 ']' 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:39.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:39.916 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:39.916 [2024-07-14 02:20:45.480329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:39.916 [2024-07-14 02:20:45.480392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735792 ] 00:33:39.916 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.916 [2024-07-14 02:20:45.543933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.174 [2024-07-14 02:20:45.643121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.174 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:40.174 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:40.174 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:40.174 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:40.174 02:20:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:40.433 02:20:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:40.433 02:20:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:40.722 nvme0n1 00:33:40.722 02:20:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:40.722 02:20:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:40.981 Running I/O for 2 seconds... 
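Every run_bperf iteration in this suite repeats the handshake visible above: bdevperf is launched with -z --wait-for-rpc against /var/tmp/bperf.sock, the framework is initialized over that socket, a controller is attached with data digest enabled (--ddgst) so each payload is checksummed with crc32c, and the 2-second workload is then triggered through bdevperf.py. Stripped of the bperf_rpc/bperf_py wrappers, the sequence for this first randread/4096/QD128 run is roughly (paths relative to the spdk tree, matching the absolute paths in the log):

    # 1. Finish bdevperf initialization once it is listening on the bperf socket.
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

    # 2. Attach the target subsystem with TCP data digest enabled; bdevperf then
    #    exposes it as bdev nvme0n1.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. Run the configured workload (randread, 4 KiB blocks, queue depth 128 here).
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests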
00:33:42.881 00:33:42.881 Latency(us) 00:33:42.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.881 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:42.881 nvme0n1 : 2.01 18704.77 73.07 0.00 0.00 6831.65 3155.44 20000.62 00:33:42.881 =================================================================================================================== 00:33:42.881 Total : 18704.77 73.07 0.00 0.00 6831.65 3155.44 20000.62 00:33:42.881 0 00:33:42.881 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:42.881 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:42.881 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:42.881 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:42.881 | select(.opcode=="crc32c") 00:33:42.881 | "\(.module_name) \(.executed)"' 00:33:42.881 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:43.139 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:43.139 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:43.139 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:43.139 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:43.139 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1735792 00:33:43.139 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1735792 ']' 00:33:43.139 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1735792 00:33:43.139 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:43.139 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:43.140 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1735792 00:33:43.140 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:43.140 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:43.140 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1735792' 00:33:43.140 killing process with pid 1735792 00:33:43.140 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1735792 00:33:43.140 Received shutdown signal, test time was about 2.000000 seconds 00:33:43.140 00:33:43.140 Latency(us) 00:33:43.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.140 =================================================================================================================== 00:33:43.140 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:43.140 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1735792 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:43.399 02:20:48 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1736291 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1736291 /var/tmp/bperf.sock 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1736291 ']' 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:43.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:43.399 02:20:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:43.399 [2024-07-14 02:20:49.040675] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:43.399 [2024-07-14 02:20:49.040774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736291 ] 00:33:43.399 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:43.399 Zero copy mechanism will not be used. 
00:33:43.399 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.657 [2024-07-14 02:20:49.106334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.657 [2024-07-14 02:20:49.204468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.657 02:20:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:43.657 02:20:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:43.657 02:20:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:43.657 02:20:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:43.657 02:20:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:43.916 02:20:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:43.916 02:20:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:44.505 nvme0n1 00:33:44.505 02:20:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:44.505 02:20:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:44.505 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:44.505 Zero copy mechanism will not be used. 00:33:44.505 Running I/O for 2 seconds... 
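After each 2-second run, the helper verifies that the digest work actually happened in the expected engine: it reads accel_get_stats from the bperf socket and filters the crc32c opcode with jq, requiring a non-zero executed count and, for these non-DSA "clean" runs, the software module. The same check that follows the first run (and is repeated after this one), spelled out:

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
    # Expected shape of the output for these runs (the count varies):
    #   software <executed>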
00:33:46.403 00:33:46.403 Latency(us) 00:33:46.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.403 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:46.403 nvme0n1 : 2.00 2622.56 327.82 0.00 0.00 6096.47 5776.88 13495.56 00:33:46.403 =================================================================================================================== 00:33:46.403 Total : 2622.56 327.82 0.00 0.00 6096.47 5776.88 13495.56 00:33:46.403 0 00:33:46.403 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:46.403 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:46.404 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:46.404 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:46.404 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:46.404 | select(.opcode=="crc32c") 00:33:46.404 | "\(.module_name) \(.executed)"' 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1736291 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1736291 ']' 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1736291 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:46.662 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1736291 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1736291' 00:33:46.921 killing process with pid 1736291 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1736291 00:33:46.921 Received shutdown signal, test time was about 2.000000 seconds 00:33:46.921 00:33:46.921 Latency(us) 00:33:46.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.921 =================================================================================================================== 00:33:46.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1736291 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:46.921 02:20:52 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1736703 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1736703 /var/tmp/bperf.sock 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1736703 ']' 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:46.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:46.921 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:47.179 [2024-07-14 02:20:52.629559] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:33:47.179 [2024-07-14 02:20:52.629634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736703 ] 00:33:47.179 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.179 [2024-07-14 02:20:52.689862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.179 [2024-07-14 02:20:52.775011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.179 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.179 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:47.179 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:47.179 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:47.179 02:20:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:47.744 02:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:47.744 02:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.001 nvme0n1 00:33:48.001 02:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:48.001 02:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:48.259 Running I/O for 2 seconds... 
00:33:50.160 00:33:50.160 Latency(us) 00:33:50.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.160 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:50.160 nvme0n1 : 2.00 21173.77 82.71 0.00 0.00 6034.93 2876.30 10971.21 00:33:50.160 =================================================================================================================== 00:33:50.160 Total : 21173.77 82.71 0.00 0.00 6034.93 2876.30 10971.21 00:33:50.160 0 00:33:50.160 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:50.160 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:50.160 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:50.160 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:50.160 | select(.opcode=="crc32c") 00:33:50.160 | "\(.module_name) \(.executed)"' 00:33:50.160 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1736703 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1736703 ']' 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1736703 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1736703 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1736703' 00:33:50.418 killing process with pid 1736703 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1736703 00:33:50.418 Received shutdown signal, test time was about 2.000000 seconds 00:33:50.418 00:33:50.418 Latency(us) 00:33:50.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.418 =================================================================================================================== 00:33:50.418 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:50.418 02:20:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1736703 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:50.676 02:20:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1737108 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1737108 /var/tmp/bperf.sock 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1737108 ']' 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:50.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:50.676 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:50.676 [2024-07-14 02:20:56.254127] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:50.676 [2024-07-14 02:20:56.254241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737108 ] 00:33:50.676 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:50.676 Zero copy mechanism will not be used. 
00:33:50.676 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.676 [2024-07-14 02:20:56.320477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.935 [2024-07-14 02:20:56.412157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.935 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:50.935 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:50.935 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:50.935 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:50.935 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:51.193 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:51.193 02:20:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:51.760 nvme0n1 00:33:51.760 02:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:51.760 02:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:51.760 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:51.760 Zero copy mechanism will not be used. 00:33:51.760 Running I/O for 2 seconds... 
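Once a run finishes, the clean test checks that the digests really went through the accel framework: it queries accel_get_stats on the bdevperf instance and confirms the crc32c opcode was executed by the expected module (software in these runs, since scan_dsa is false). A sketch of that check, reusing the jq filter shown earlier in this log; the variable names mirror digest.sh but the snippet is only illustrative:

# Sketch: verify crc32c digests were executed by the expected accel module.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
# Pull per-opcode accel stats and keep only the crc32c entry.
read -r acc_module acc_executed < <(
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
exp_module=software   # scan_dsa=false here, so no DSA offload is expected
(( acc_executed > 0 )) && [[ "$acc_module" == "$exp_module" ]]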
00:33:53.661 00:33:53.661 Latency(us) 00:33:53.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.661 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:53.661 nvme0n1 : 2.01 1700.82 212.60 0.00 0.00 9380.61 5606.97 13689.74 00:33:53.661 =================================================================================================================== 00:33:53.661 Total : 1700.82 212.60 0.00 0.00 9380.61 5606.97 13689.74 00:33:53.661 0 00:33:53.661 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:53.661 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:53.661 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:53.661 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:53.661 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:53.661 | select(.opcode=="crc32c") 00:33:53.661 | "\(.module_name) \(.executed)"' 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1737108 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1737108 ']' 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1737108 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737108 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737108' 00:33:53.919 killing process with pid 1737108 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1737108 00:33:53.919 Received shutdown signal, test time was about 2.000000 seconds 00:33:53.919 00:33:53.919 Latency(us) 00:33:53.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.919 =================================================================================================================== 00:33:53.919 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:53.919 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1737108 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1735724 00:33:54.178 02:20:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1735724 ']' 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1735724 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1735724 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1735724' 00:33:54.178 killing process with pid 1735724 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1735724 00:33:54.178 02:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1735724 00:33:54.436 00:33:54.436 real 0m15.039s 00:33:54.436 user 0m30.344s 00:33:54.436 sys 0m3.744s 00:33:54.436 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:54.436 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:54.436 ************************************ 00:33:54.437 END TEST nvmf_digest_clean 00:33:54.437 ************************************ 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:54.437 ************************************ 00:33:54.437 START TEST nvmf_digest_error 00:33:54.437 ************************************ 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1737677 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1737677 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1737677 ']' 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:54.437 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.696 [2024-07-14 02:21:00.148757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:54.696 [2024-07-14 02:21:00.148831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.696 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.696 [2024-07-14 02:21:00.214505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.696 [2024-07-14 02:21:00.298808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.696 [2024-07-14 02:21:00.298897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.696 [2024-07-14 02:21:00.298913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.696 [2024-07-14 02:21:00.298924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.696 [2024-07-14 02:21:00.298934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
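The error-path test (nvmf_digest_error) brings the target up in the same deferred-init mode so that crc32c can be routed to the error-injection accel module before the framework starts; the accel_assign_opc call appears just below. A sketch of the target-side setup, assuming the cvl_0_0_ns_spdk namespace and the default /var/tmp/spdk.sock used throughout this job (the ordering of framework_start_init is an assumption drawn from the --wait-for-rpc flag, not a verbatim copy of digest.sh):

# Sketch: start nvmf_tgt with framework init deferred, reassign crc32c, then start.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
# Route crc32c through the error-injection module, then let the framework come up.
"$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error
"$SPDK/scripts/rpc.py" framework_start_init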
00:33:54.696 [2024-07-14 02:21:00.298961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.696 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:54.696 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:54.696 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:54.696 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:54.696 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.696 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.696 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:54.696 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.696 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.696 [2024-07-14 02:21:00.387566] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.955 null0 00:33:54.955 [2024-07-14 02:21:00.502726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.955 [2024-07-14 02:21:00.526972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:54.955 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1737719 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1737719 /var/tmp/bperf.sock 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1737719 ']' 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:54.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:54.956 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.956 [2024-07-14 02:21:00.574487] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:54.956 [2024-07-14 02:21:00.574548] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737719 ] 00:33:54.956 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.956 [2024-07-14 02:21:00.639007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.214 [2024-07-14 02:21:00.732419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.214 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:55.214 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:55.214 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:55.214 02:21:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:55.472 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:55.472 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.472 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.472 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.472 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:55.472 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:56.039 nvme0n1 00:33:56.039 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:56.039 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.039 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:56.039 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.039 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:56.039 02:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:56.039 Running I/O for 2 seconds... 00:33:56.039 [2024-07-14 02:21:01.605475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.605549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.605572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.039 [2024-07-14 02:21:01.619191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.619225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.619262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.039 [2024-07-14 02:21:01.634143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.634186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.634204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.039 [2024-07-14 02:21:01.647248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.647279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.647318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.039 [2024-07-14 02:21:01.659295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.659326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.659345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.039 [2024-07-14 02:21:01.673585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.673616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.673633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.039 [2024-07-14 02:21:01.687999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.688031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:575 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.688064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.039 [2024-07-14 02:21:01.700040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.700072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.700090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.039 [2024-07-14 02:21:01.714688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.714722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.714741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.039 [2024-07-14 02:21:01.726747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.039 [2024-07-14 02:21:01.726779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.039 [2024-07-14 02:21:01.726823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.298 [2024-07-14 02:21:01.741347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.741397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.741418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.754359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.754408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.754429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.767458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.767489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.767509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.781246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.781277] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.781305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.792925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.792956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.792974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.807929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.807961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.807978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.821200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.821230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.821249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.833996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.834025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.834064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.847675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.847710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.847740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.859878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.859926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.859942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.873159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 
02:21:01.873209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.873233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.888117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.888158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.888175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.902991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.903023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.903042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.914875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.914922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.914941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.929306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.929348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.929364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.942525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.942555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.942590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.954684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.954714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.954734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.967139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.967194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.967225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.299 [2024-07-14 02:21:01.980184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.299 [2024-07-14 02:21:01.980239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.299 [2024-07-14 02:21:01.980258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.558 [2024-07-14 02:21:01.995127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.558 [2024-07-14 02:21:01.995181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.558 [2024-07-14 02:21:01.995199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.558 [2024-07-14 02:21:02.008121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.558 [2024-07-14 02:21:02.008154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.558 [2024-07-14 02:21:02.008180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.558 [2024-07-14 02:21:02.020821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.020855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.020892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.035675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.035707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.035728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.046542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.046572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.046588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.060107] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.060154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.060176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.073130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.073176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.073193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.086066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.086097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.086115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.097181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.097227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.097244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.110208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.110254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.110271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.122809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.122840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.122858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.136246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.136277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.136295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.149107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.149137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.149155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.161157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.161188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.161206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.173094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.173147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.173166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.185745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.185776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.185793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.198149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.198179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.198211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.212906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.212938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.212955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.223834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.223870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.223889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.559 [2024-07-14 02:21:02.236998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.559 [2024-07-14 02:21:02.237029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.559 [2024-07-14 02:21:02.237046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.820 [2024-07-14 02:21:02.250535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.820 [2024-07-14 02:21:02.250570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.820 [2024-07-14 02:21:02.250589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.820 [2024-07-14 02:21:02.261967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.820 [2024-07-14 02:21:02.262000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.820 [2024-07-14 02:21:02.262018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.820 [2024-07-14 02:21:02.275925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.820 [2024-07-14 02:21:02.275956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.820 [2024-07-14 02:21:02.275973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.820 [2024-07-14 02:21:02.286973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.820 [2024-07-14 02:21:02.287003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.820 [2024-07-14 02:21:02.287020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.820 [2024-07-14 02:21:02.301376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.820 [2024-07-14 02:21:02.301407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.820 [2024-07-14 02:21:02.301423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.820 [2024-07-14 02:21:02.314650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.820 [2024-07-14 02:21:02.314681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.820 [2024-07-14 02:21:02.314699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.820 [2024-07-14 02:21:02.326918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.820 [2024-07-14 02:21:02.326949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.820 [2024-07-14 02:21:02.326967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.820 [2024-07-14 02:21:02.341889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.341921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.341938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.353204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.353232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.353248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.366398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.366429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.366446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.379490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.379521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.379539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.391085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.391115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.391152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.405269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.405300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
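Each record in this stream is one injected corruption taking the intended path: the corrupted crc32c surfaces as a data digest error in nvme_tcp.c, the READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, and the bdev_nvme layer is free to retry it because bdev_nvme_set_options was called with --bdev-retry-count -1. A hypothetical one-liner for sanity-checking a saved copy of such output (the bdevperf.log file name is illustrative, not part of this job):

# Count injected digest errors and the transient-transport completions they produce.
grep -c 'data digest error on tqpair' bdevperf.log
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf.log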
00:33:56.821 [2024-07-14 02:21:02.405318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.418012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.418043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.418061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.428693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.428722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.428738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.441273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.441317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.441334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.456028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.456059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.456077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.469212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.469243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.469261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.481306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.481337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.481370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.493040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.493071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:6459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.493088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.821 [2024-07-14 02:21:02.506756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:56.821 [2024-07-14 02:21:02.506790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.821 [2024-07-14 02:21:02.506808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.520216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.520249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.520267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.532835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.532874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.532893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.545531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.545563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.545580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.558269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.558299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.558317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.570568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.570614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.570633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.583608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.583639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.583656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.596011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.596042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.596059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.609164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.609207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.609233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.620789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.620821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.620838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.635709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.635743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.635763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.649241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.649273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.649290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.660438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.660466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.660481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.674151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 
00:33:57.114 [2024-07-14 02:21:02.674197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.674215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.688513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.688558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.688574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.701630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.701663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.701683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.713041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.713070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.713086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.729012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.729046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.729064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.742594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.742624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.742641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.114 [2024-07-14 02:21:02.755908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.114 [2024-07-14 02:21:02.755953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.114 [2024-07-14 02:21:02.755970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.115 [2024-07-14 02:21:02.767979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.115 [2024-07-14 02:21:02.768011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.115 [2024-07-14 02:21:02.768028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.115 [2024-07-14 02:21:02.782274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.115 [2024-07-14 02:21:02.782309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.115 [2024-07-14 02:21:02.782328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.373 [2024-07-14 02:21:02.797847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.373 [2024-07-14 02:21:02.797892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.373 [2024-07-14 02:21:02.797933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.373 [2024-07-14 02:21:02.809062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.373 [2024-07-14 02:21:02.809096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.373 [2024-07-14 02:21:02.809115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.822688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.822721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.822739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.836691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.836726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.836746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.848383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.848417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.848437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.861819] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.861854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.861881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.876554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.876584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.876602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.887973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.888004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.888021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.902400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.902431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.902449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.915328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.915374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.915391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.927242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.927271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.927288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.940583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.940614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.940631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.953409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.953453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.953478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.967703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.967736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.967755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.980062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.980092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.980124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:02.992391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:02.992421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:02.992438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:03.006031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:03.006062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:03.006079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:03.018746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:03.018777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:03.018794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:03.032761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:03.032795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:03.032814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:03.045319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:03.045349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:03.045366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.374 [2024-07-14 02:21:03.058563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.374 [2024-07-14 02:21:03.058598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.374 [2024-07-14 02:21:03.058617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.632 [2024-07-14 02:21:03.072176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.632 [2024-07-14 02:21:03.072210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.632 [2024-07-14 02:21:03.072228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.632 [2024-07-14 02:21:03.085461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.632 [2024-07-14 02:21:03.085493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.632 [2024-07-14 02:21:03.085511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.097889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.097936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.097953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.110907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.110941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.110973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.125691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.125723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.125740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.137568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.137597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.137629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.151517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.151552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.151571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.164708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.164742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.164762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.178097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.178127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.178165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.191663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.191693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.191725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.205400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.205431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.205448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.217889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.217937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:57.633 [2024-07-14 02:21:03.217954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.234007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.234037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.234055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.246892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.246937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.246954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.258123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.258154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.258171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.271602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.271632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.271648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.286197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.286242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.286260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.298309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.298344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.298362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.633 [2024-07-14 02:21:03.312022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.633 [2024-07-14 02:21:03.312070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:15706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.633 [2024-07-14 02:21:03.312087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.324716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.324754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.324775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.337880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.337916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.337935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.351456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.351488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.351506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.363434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.363469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.363488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.376594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.376629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.376648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.389911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.389940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.389957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.404749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.404782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.404799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.416207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.416239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.416256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.431190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.431221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.431239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.442853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.442892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.442911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.457631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.457663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.457681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.470810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.470846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.470872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.482895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.482940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.482957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.496306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 
00:33:57.891 [2024-07-14 02:21:03.496337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.496354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.509695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.509726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.509744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.522189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.522235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.522258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.535408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.535439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.535470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.548627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.548658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.548675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.561665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.561696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.561713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.891 [2024-07-14 02:21:03.576516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e463c0) 00:33:57.891 [2024-07-14 02:21:03.576550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.891 [2024-07-14 02:21:03.576570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.149 00:33:58.149 Latency(us) 00:33:58.149 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max
00:33:58.149 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:58.149 nvme0n1 : 2.00 19346.44 75.57 0.00 0.00 6608.75 3046.21 18350.08
00:33:58.149 ===================================================================================================================
00:33:58.149 Total : 19346.44 75.57 0.00 0.00 6608.75 3046.21 18350.08
00:33:58.149 0
00:33:58.149 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:58.149 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:58.149 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:58.149 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:58.149 | .driver_specific
00:33:58.149 | .nvme_error
00:33:58.149 | .status_code
00:33:58.149 | .command_transient_transport_error'
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 ))
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1737719
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1737719 ']'
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1737719
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737719
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737719'
00:33:58.407 killing process with pid 1737719
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1737719
00:33:58.407 Received shutdown signal, test time was about 2.000000 seconds
00:33:58.407
00:33:58.407 Latency(us)
00:33:58.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:58.407 ===================================================================================================================
00:33:58.407 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:58.407 02:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1737719
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1738202
02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1738202 /var/tmp/bperf.sock
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1738202 ']'
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:58.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:58.664 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:58.664 [2024-07-14 02:21:04.175369] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:33:58.664 [2024-07-14 02:21:04.175470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738202 ]
00:33:58.664 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:58.664 Zero copy mechanism will not be used.
00:33:58.664 EAL: No free 2048 kB hugepages reported on node 1
00:33:58.922 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:58.922 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:58.922 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:58.922 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:59.179 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:59.179 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:59.179 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:59.179 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.179 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:59.179 02:21:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:59.437 nvme0n1
00:33:59.437 02:21:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:59.437 02:21:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:59.437 02:21:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:59.437 02:21:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.437 02:21:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:59.437 02:21:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:59.696 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:59.696 Zero copy mechanism will not be used.
00:33:59.696 Running I/O for 2 seconds...
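The digest-error pass traced above condenses to the sequence below. This is a sketch reassembled only from commands that appear in this log: the bperf socket path, the 10.0.0.2:4420 target, the cnode1 NQN and the -i 32 injection interval are taken from this run, while the socket used by rpc_cmd is not expanded in the trace, so those calls are shown here without an explicit -s flag.

  # Start bdevperf as the TCP host: 128 KiB random reads, queue depth 16, 2 s run,
  # with its RPC server listening on /var/tmp/bperf.sock.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

  # Keep per-status-code NVMe error counters and retry transient errors indefinitely,
  # so digest failures are retried and tallied instead of failing the workload.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previous crc32c error injection, attach the controller with TCP data
  # digest enabled (--ddgst), then inject corrupted crc32c results (interval -i 32
  # as used in this run) so data digest mismatches start showing up.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the workload, then read back how many commands completed with a transient
  # transport error; the test asserts this count is non-zero, and the first pass
  # above passed that check with 151 such errors, i.e. (( 151 > 0 )).
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'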
00:33:59.696 [2024-07-14 02:21:05.203508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.696 [2024-07-14 02:21:05.203573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.696 [2024-07-14 02:21:05.203596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.696 [2024-07-14 02:21:05.216672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.696 [2024-07-14 02:21:05.216711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.216732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.229611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.229647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.229668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.242665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.242700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.242720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.255576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.255611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.255632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.268438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.268473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.268493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.281342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.281377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.281397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.294144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.294192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.294212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.306978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.307009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.307026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.319609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.319644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.319664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.332710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.332746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.332765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.345648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.345683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.345703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.358002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.358034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.358051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.370971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.371017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.371039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.697 [2024-07-14 02:21:05.383104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.697 [2024-07-14 02:21:05.383157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.697 [2024-07-14 02:21:05.383202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.396070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.396105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.396123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.409289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.409325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.409345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.422084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.422115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.422132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.434850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.434892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.434913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.447639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.447673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.447692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.460669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.460704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:59.957 [2024-07-14 02:21:05.460724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.473443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.473477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.473497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.486162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.486212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.486231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.498812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.498846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.498874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.511652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.511686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.511705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.524406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.524440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.524459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.537170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.537217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.537237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.550226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.550274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.550293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.563246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.563280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.563299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.575820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.575855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.575884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.588571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.588605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.588624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.601480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.601515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.601535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.614102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.614133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.614151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.626823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.626858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.626887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.957 [2024-07-14 02:21:05.639808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:33:59.957 [2024-07-14 02:21:05.639843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.957 [2024-07-14 02:21:05.639862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.652722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.652757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.652776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.665966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.665996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.666013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.678722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.678757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.678778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.691787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.691822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.691842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.704562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.704602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.704623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.717750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.717783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.717802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.730578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 
00:34:00.216 [2024-07-14 02:21:05.730615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.730635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.743404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.743439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.743458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.756198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.756244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.756264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.769209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.769256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.769275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.782165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.782209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.782229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.794942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.794973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.216 [2024-07-14 02:21:05.795005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.216 [2024-07-14 02:21:05.807681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.216 [2024-07-14 02:21:05.807715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.217 [2024-07-14 02:21:05.807735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.217 [2024-07-14 02:21:05.820544] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.217 [2024-07-14 02:21:05.820578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.217 [2024-07-14 02:21:05.820597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.217 [2024-07-14 02:21:05.833248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.217 [2024-07-14 02:21:05.833282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.217 [2024-07-14 02:21:05.833302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.217 [2024-07-14 02:21:05.846796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.217 [2024-07-14 02:21:05.846831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.217 [2024-07-14 02:21:05.846851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.217 [2024-07-14 02:21:05.858934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.217 [2024-07-14 02:21:05.858967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.217 [2024-07-14 02:21:05.858985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.217 [2024-07-14 02:21:05.870793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.217 [2024-07-14 02:21:05.870822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.217 [2024-07-14 02:21:05.870838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.217 [2024-07-14 02:21:05.882641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.217 [2024-07-14 02:21:05.882670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.217 [2024-07-14 02:21:05.882687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.217 [2024-07-14 02:21:05.895392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.217 [2024-07-14 02:21:05.895426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.217 [2024-07-14 02:21:05.895445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:05.908087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:05.908118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:05.908136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:05.920987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:05.921017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:05.921039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:05.933727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:05.933761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:05.933781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:05.946631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:05.946665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:05.946685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:05.959581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:05.959613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:05.959633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:05.972346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:05.972380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:05.972400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:05.985134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:05.985179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:05.985196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:05.997986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:05.998016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:05.998034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:06.010646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:06.010680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:06.010700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:06.023436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:06.023470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:06.023490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:06.036181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:06.036215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:06.036250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:06.048969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:06.049000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:06.049017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:06.061745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:06.061780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:06.061800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:06.074673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:06.074707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:06.074726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:06.087560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:06.087595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:06.087614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.475 [2024-07-14 02:21:06.100363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.475 [2024-07-14 02:21:06.100397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.475 [2024-07-14 02:21:06.100417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.476 [2024-07-14 02:21:06.113182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.476 [2024-07-14 02:21:06.113228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.476 [2024-07-14 02:21:06.113248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.476 [2024-07-14 02:21:06.125985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.476 [2024-07-14 02:21:06.126014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.476 [2024-07-14 02:21:06.126032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.476 [2024-07-14 02:21:06.138785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.476 [2024-07-14 02:21:06.138819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.476 [2024-07-14 02:21:06.138843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.476 [2024-07-14 02:21:06.151783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.476 [2024-07-14 02:21:06.151817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.476 [2024-07-14 02:21:06.151838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.476 [2024-07-14 02:21:06.164999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.476 [2024-07-14 02:21:06.165028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:00.476 [2024-07-14 02:21:06.165045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.177852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.177895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.177916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.190671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.190706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.190726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.203641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.203675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.203695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.216535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.216570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.216589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.229324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.229357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.229377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.242153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.242198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.242216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.255130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.255166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.255201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.268093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.268124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.268142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.280880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.280927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.280943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.293762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.293796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.293816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.306579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.306614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.306634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.319320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.319353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.319373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.332080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.332110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.332127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.344756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.344790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.344809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.357742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.734 [2024-07-14 02:21:06.357775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.734 [2024-07-14 02:21:06.357795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.734 [2024-07-14 02:21:06.370684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.735 [2024-07-14 02:21:06.370719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.735 [2024-07-14 02:21:06.370739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.735 [2024-07-14 02:21:06.383668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.735 [2024-07-14 02:21:06.383701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.735 [2024-07-14 02:21:06.383721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.735 [2024-07-14 02:21:06.396627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.735 [2024-07-14 02:21:06.396662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.735 [2024-07-14 02:21:06.396681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.735 [2024-07-14 02:21:06.409590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.735 [2024-07-14 02:21:06.409625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.735 [2024-07-14 02:21:06.409644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.735 [2024-07-14 02:21:06.422413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.735 [2024-07-14 02:21:06.422447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.735 [2024-07-14 02:21:06.422467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.435321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 
00:34:00.994 [2024-07-14 02:21:06.435355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.435374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.448496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.448529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.448549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.461300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.461335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.461355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.474316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.474351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.474377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.487342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.487377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.487398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.500459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.500494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.500514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.513304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.513338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.513358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.526082] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.526114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.526132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.538815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.538849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.538882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.551584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.551619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.551639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.564611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.564646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.564665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.577412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.577448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.577468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.590474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.590508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.590530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.603272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.603307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.603326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:34:00.994 [2024-07-14 02:21:06.616218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.994 [2024-07-14 02:21:06.616251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.994 [2024-07-14 02:21:06.616270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.995 [2024-07-14 02:21:06.629009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.995 [2024-07-14 02:21:06.629040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.995 [2024-07-14 02:21:06.629056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.995 [2024-07-14 02:21:06.641399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.995 [2024-07-14 02:21:06.641433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.995 [2024-07-14 02:21:06.641453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.995 [2024-07-14 02:21:06.654090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.995 [2024-07-14 02:21:06.654121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.995 [2024-07-14 02:21:06.654153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.995 [2024-07-14 02:21:06.666757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.995 [2024-07-14 02:21:06.666791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.995 [2024-07-14 02:21:06.666811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.995 [2024-07-14 02:21:06.679494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:00.995 [2024-07-14 02:21:06.679528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.995 [2024-07-14 02:21:06.679552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.253 [2024-07-14 02:21:06.692386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.253 [2024-07-14 02:21:06.692421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.253 [2024-07-14 02:21:06.692449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.253 [2024-07-14 02:21:06.705205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.253 [2024-07-14 02:21:06.705252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.253 [2024-07-14 02:21:06.705272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.253 [2024-07-14 02:21:06.718133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.253 [2024-07-14 02:21:06.718167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.253 [2024-07-14 02:21:06.718184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.253 [2024-07-14 02:21:06.730903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.253 [2024-07-14 02:21:06.730949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.253 [2024-07-14 02:21:06.730967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.253 [2024-07-14 02:21:06.743752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.253 [2024-07-14 02:21:06.743786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.743805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.756983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.757012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.757033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.770003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.770033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.770050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.783062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.783092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.783109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.795849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.795892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.795927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.808823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.808880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.808917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.822011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.822055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.822073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.834877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.834924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.834941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.847757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.847791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.847810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.860398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.860431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.860451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.873139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.873184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:01.254 [2024-07-14 02:21:06.873201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.886204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.886236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.886255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.900593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.900628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.900647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.912424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.912454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.912478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.924306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.924335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.924352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.254 [2024-07-14 02:21:06.936044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.254 [2024-07-14 02:21:06.936077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.254 [2024-07-14 02:21:06.936094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:06.948295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:06.948339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:06.948355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:06.960164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:06.960195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:06.960228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:06.972802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:06.972835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:06.972878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:06.985678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:06.985711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:06.985731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:06.998706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:06.998740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:06.998759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.011563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.011597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.011617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.024381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.024414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.024441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.037193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.037226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.037246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.049926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.049955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.049972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.062639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.062672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.062692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.075778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.075811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.075830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.088683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.088717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.088735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.101829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.101884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.101919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.114630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.114664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.114683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.127356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 00:34:01.514 [2024-07-14 02:21:07.127390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.514 [2024-07-14 02:21:07.127410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.514 [2024-07-14 02:21:07.140260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10) 
00:34:01.514 [2024-07-14 02:21:07.140305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.514 [2024-07-14 02:21:07.140325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:01.514 [2024-07-14 02:21:07.153159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10)
00:34:01.514 [2024-07-14 02:21:07.153209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.514 [2024-07-14 02:21:07.153229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:01.514 [2024-07-14 02:21:07.165973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10)
00:34:01.514 [2024-07-14 02:21:07.166005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.514 [2024-07-14 02:21:07.166022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:01.514 [2024-07-14 02:21:07.178763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10)
00:34:01.514 [2024-07-14 02:21:07.178798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.514 [2024-07-14 02:21:07.178817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:01.514 [2024-07-14 02:21:07.191376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2483f10)
00:34:01.514 [2024-07-14 02:21:07.191411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.514 [2024-07-14 02:21:07.191436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:01.514
00:34:01.514 Latency(us)
00:34:01.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:01.514 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:01.514 nvme0n1 : 2.00 2421.62 302.70 0.00 0.00 6602.74 5728.33 13786.83
00:34:01.514 ===================================================================================================================
00:34:01.514 Total : 2421.62 302.70 0.00 0.00 6602.74 5728.33 13786.83
00:34:01.514 0
00:34:01.773 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:01.773 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:01.773 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:01.773 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:01.773 | .driver_specific
00:34:01.773 | .nvme_error
00:34:01.773
| .status_code 00:34:01.773 | .command_transient_transport_error' 00:34:02.031 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 )) 00:34:02.031 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1738202 00:34:02.031 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1738202 ']' 00:34:02.031 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1738202 00:34:02.031 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:02.031 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:02.031 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1738202 00:34:02.031 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1738202' 00:34:02.032 killing process with pid 1738202 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1738202 00:34:02.032 Received shutdown signal, test time was about 2.000000 seconds 00:34:02.032 00:34:02.032 Latency(us) 00:34:02.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.032 =================================================================================================================== 00:34:02.032 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1738202 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1738961 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1738961 /var/tmp/bperf.sock 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1738961 ']' 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:02.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
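For reference, the get_transient_errcount helper traced above reduces to one RPC call plus a jq filter over the per-bdev NVMe error counters that --nvme-error-stat enables. A minimal standalone sketch, assuming the bdevperf instance is still serving RPCs on /var/tmp/bperf.sock, the bdev is named nvme0n1, and jq is installed; every path and JSON field below is taken from the trace itself:

    # Count completions that ended in TRANSIENT TRANSPORT ERROR for one bdev
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

The test only asserts that this count is non-zero after the 2-second run, which is the "(( 156 > 0 ))" check visible above.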
00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:02.032 02:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.290 [2024-07-14 02:21:07.760196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:02.290 [2024-07-14 02:21:07.760302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738961 ] 00:34:02.290 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.290 [2024-07-14 02:21:07.826321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.290 [2024-07-14 02:21:07.918964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.548 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:02.548 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:02.548 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:02.548 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:02.806 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:02.806 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.806 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.806 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.806 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:02.806 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:03.064 nvme0n1 00:34:03.322 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:03.322 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.322 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:03.322 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.322 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:03.322 02:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:03.322 Running I/O for 2 seconds... 
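Before the error output below, the trace shows the whole setup for this randwrite pass: NVMe error statistics are enabled on the bdevperf side, any previous crc32c error injection is cleared, the controller is attached with --ddgst so TCP data digests are generated and verified, crc32c corruption is injected (-o crc32c -t corrupt -i 256), and the 2-second workload is started. A condensed sketch of that RPC sequence follows; the bperf socket, target address, NQN and flags come from the trace, while the helper names and the target's RPC socket path (/var/tmp/spdk.sock, the SPDK default, used here because rpc_cmd does not name one in this log) are assumptions:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc()  { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf (host) side
    target_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }   # nvmf target side (assumed socket)

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    target_rpc accel_error_inject_error -o crc32c -t disable               # start from a clean state
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                     # data digest enabled
    target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256        # corrupt crc32c results (-i 256 as traced)
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With crc32c results corrupted, a fraction of the data digest checks fail during the run, which is exactly the stream of data_crc32_calc_done errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follows.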
00:34:03.322 [2024-07-14 02:21:08.908182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.322 [2024-07-14 02:21:08.908520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.322 [2024-07-14 02:21:08.908561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.322 [2024-07-14 02:21:08.922374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.322 [2024-07-14 02:21:08.922692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.322 [2024-07-14 02:21:08.922726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.322 [2024-07-14 02:21:08.936534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.322 [2024-07-14 02:21:08.936821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.322 [2024-07-14 02:21:08.936871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.322 [2024-07-14 02:21:08.950951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.322 [2024-07-14 02:21:08.951237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.322 [2024-07-14 02:21:08.951269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.322 [2024-07-14 02:21:08.965092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.322 [2024-07-14 02:21:08.965421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.322 [2024-07-14 02:21:08.965452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.322 [2024-07-14 02:21:08.979116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.322 [2024-07-14 02:21:08.979445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.322 [2024-07-14 02:21:08.979477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.322 [2024-07-14 02:21:08.993070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.322 [2024-07-14 02:21:08.993392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.322 [2024-07-14 02:21:08.993435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:34:03.322 [2024-07-14 02:21:09.007029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.322 [2024-07-14 02:21:09.007340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.322 [2024-07-14 02:21:09.007371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.580 [2024-07-14 02:21:09.021337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.580 [2024-07-14 02:21:09.021648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.580 [2024-07-14 02:21:09.021678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.580 [2024-07-14 02:21:09.034974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.580 [2024-07-14 02:21:09.035286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.580 [2024-07-14 02:21:09.035314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.580 [2024-07-14 02:21:09.048103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.580 [2024-07-14 02:21:09.048429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.580 [2024-07-14 02:21:09.048456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.580 [2024-07-14 02:21:09.060645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.580 [2024-07-14 02:21:09.060913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.580 [2024-07-14 02:21:09.060941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.580 [2024-07-14 02:21:09.074012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.580 [2024-07-14 02:21:09.074352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.580 [2024-07-14 02:21:09.074380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.580 [2024-07-14 02:21:09.087168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.580 [2024-07-14 02:21:09.087425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.580 [2024-07-14 02:21:09.087458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:34:03.580 [2024-07-14 02:21:09.100545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.580 [2024-07-14 02:21:09.100828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.580 [2024-07-14 02:21:09.100861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.580 [2024-07-14 02:21:09.113919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.580 [2024-07-14 02:21:09.114185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.580 [2024-07-14 02:21:09.114225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.127632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.127972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.128000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.142187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.142535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.142566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.156994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.157262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.157293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.171528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.171824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.171883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.185972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.186275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.186307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.199939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.200268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.200299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.214057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.214376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.214413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.228030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.228341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.228372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.242009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.242329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.242360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.256042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.256370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.256400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.581 [2024-07-14 02:21:09.270217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.581 [2024-07-14 02:21:09.270513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.581 [2024-07-14 02:21:09.270544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.284358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.284638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.284669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.298319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.298630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.298661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.312315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.312594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.312625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.326377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.326692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.326724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.340438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.340723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.340754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.354502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.354789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.354822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.368511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.368794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.368826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.382543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.382823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.382855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.396562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.396843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.396883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.410621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.410914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.410943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.424644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.424966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.425010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.438640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.438960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.438989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.452725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.453019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.453048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.466770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.467182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.467214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.480806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.481178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.481211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.494818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.495235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.841 [2024-07-14 02:21:09.495268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.841 [2024-07-14 02:21:09.508943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.841 [2024-07-14 02:21:09.509281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.842 [2024-07-14 02:21:09.509313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.842 [2024-07-14 02:21:09.522920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:03.842 [2024-07-14 02:21:09.523275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.842 [2024-07-14 02:21:09.523307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.102 [2024-07-14 02:21:09.537284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.102 [2024-07-14 02:21:09.537621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.102 [2024-07-14 02:21:09.537653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.102 [2024-07-14 02:21:09.551274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.102 [2024-07-14 02:21:09.551585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.102 [2024-07-14 02:21:09.551617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.102 [2024-07-14 02:21:09.565187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.102 [2024-07-14 02:21:09.565499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.102 [2024-07-14 02:21:09.565530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.102 [2024-07-14 02:21:09.579189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.102 [2024-07-14 02:21:09.579476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.102 [2024-07-14 02:21:09.579513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.102 [2024-07-14 02:21:09.593202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.102 [2024-07-14 02:21:09.593534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.593566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.607242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.607556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.607588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.621327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.621604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.621636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.635351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.635661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.635693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.649338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.649617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.649648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.663360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.663644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.663677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.677430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.677740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.677782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.691403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.691685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.691716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.705451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.705744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.705775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.719575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.719853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.719893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.733632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.733925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.733954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.747748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.748077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.748106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.761795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.762155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.762198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.775758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.776164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.776192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.103 [2024-07-14 02:21:09.789736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.103 [2024-07-14 02:21:09.790074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.103 [2024-07-14 02:21:09.790102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.362 [2024-07-14 02:21:09.804005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.804363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.804394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.818047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.818374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.818405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.832010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.832290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.832321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.845997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.846319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.846350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.860087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.860378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.860409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.874116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.874438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.874469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.888101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.888417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.888449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.902069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.902389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.902419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.916082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.916405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.916435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.930101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.930424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.930455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.944109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.944513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.944545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.958154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.958475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.958502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.972091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.972343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.972370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.986047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:09.986365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:09.986391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:09.999894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:10.000166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:10.000193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:10.014079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:10.014384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:10.014420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:10.028161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:10.028549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:10.028579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.363 [2024-07-14 02:21:10.042447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.363 [2024-07-14 02:21:10.042808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.363 [2024-07-14 02:21:10.042837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.622 [2024-07-14 02:21:10.056917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.622 [2024-07-14 02:21:10.057274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.622 [2024-07-14 02:21:10.057321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.622 [2024-07-14 02:21:10.070675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.622 [2024-07-14 02:21:10.071016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.622 [2024-07-14 02:21:10.071052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.622 [2024-07-14 02:21:10.084877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.622 [2024-07-14 02:21:10.085151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.085178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.098858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.099161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.099187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.112889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.113158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.113201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.126844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.127128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.127154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.140920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.141172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.141200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.155075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.155366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.155394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.169217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.169563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.169589] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.183430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.183749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.183777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.197374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.197739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.197769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.211353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.211665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.211693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.225322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.225598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.225624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.239275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.239593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.239636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.253279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.253555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.253582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.267346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.267665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.267692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.281293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.281571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.281598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.295230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.295508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.295535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.623 [2024-07-14 02:21:10.309222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.623 [2024-07-14 02:21:10.309616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.623 [2024-07-14 02:21:10.309657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.323340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.323660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.323687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.337345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.337721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.337747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.351345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.351619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.351646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.365359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.365643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.365671] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.379314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.379608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.379634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.393320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.393642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.393668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.407288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.407638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.407665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.421243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.421592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.421618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.435330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.435618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.435649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.449138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.449499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.449543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.463117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.463406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 
02:21:10.463432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.477071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.477395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.477420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.491079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.491351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.491377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.505031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.505386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.505429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.519168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.519479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.882 [2024-07-14 02:21:10.519505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.882 [2024-07-14 02:21:10.533252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.882 [2024-07-14 02:21:10.533525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.883 [2024-07-14 02:21:10.533552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.883 [2024-07-14 02:21:10.547196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.883 [2024-07-14 02:21:10.547487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.883 [2024-07-14 02:21:10.547528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:04.883 [2024-07-14 02:21:10.561226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:04.883 [2024-07-14 02:21:10.561508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.883 
[2024-07-14 02:21:10.561535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.141 [2024-07-14 02:21:10.575592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.141 [2024-07-14 02:21:10.575910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.141 [2024-07-14 02:21:10.575958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.141 [2024-07-14 02:21:10.589620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.141 [2024-07-14 02:21:10.589933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.141 [2024-07-14 02:21:10.589975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.141 [2024-07-14 02:21:10.603570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.141 [2024-07-14 02:21:10.603846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.603882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.617612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.617892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.617919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.631614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.631983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.632009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.645620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.645894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.645921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.659634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.659988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:05.142 [2024-07-14 02:21:10.660014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.673552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.673843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.673877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.687481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.687789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.687821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.701285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.701560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.701590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.715132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.715469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.715499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.729109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.729402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.729432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.743125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.743442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.743472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.757095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.757423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10361 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:05.142 [2024-07-14 02:21:10.757453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.771105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.771425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.771455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.785204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.785515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.785544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.799105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.799422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.799452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.813047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.813400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.813430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.142 [2024-07-14 02:21:10.827028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.142 [2024-07-14 02:21:10.827342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.142 [2024-07-14 02:21:10.827372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.401 [2024-07-14 02:21:10.841345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.401 [2024-07-14 02:21:10.841658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.401 [2024-07-14 02:21:10.841688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.401 [2024-07-14 02:21:10.855343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.401 [2024-07-14 02:21:10.855623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1067 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:05.401 [2024-07-14 02:21:10.855652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.401 [2024-07-14 02:21:10.869280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.401 [2024-07-14 02:21:10.869557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.401 [2024-07-14 02:21:10.869587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.401 [2024-07-14 02:21:10.883346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24fac40) with pdu=0x2000190fda78 00:34:05.401 [2024-07-14 02:21:10.883626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.401 [2024-07-14 02:21:10.883656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.401 00:34:05.401 Latency(us) 00:34:05.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.401 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:05.401 nvme0n1 : 2.01 18167.20 70.97 0.00 0.00 7029.61 6140.97 16214.09 00:34:05.401 =================================================================================================================== 00:34:05.401 Total : 18167.20 70.97 0.00 0.00 7029.61 6140.97 16214.09 00:34:05.401 0 00:34:05.401 02:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:05.401 02:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:05.401 02:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:05.401 02:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:05.401 | .driver_specific 00:34:05.401 | .nvme_error 00:34:05.401 | .status_code 00:34:05.401 | .command_transient_transport_error' 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1738961 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1738961 ']' 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1738961 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1738961 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 1738961' 00:34:05.660 killing process with pid 1738961 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1738961 00:34:05.660 Received shutdown signal, test time was about 2.000000 seconds 00:34:05.660 00:34:05.660 Latency(us) 00:34:05.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.660 =================================================================================================================== 00:34:05.660 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:05.660 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1738961 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1739640 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1739640 /var/tmp/bperf.sock 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1739640 ']' 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:05.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:05.919 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:05.919 [2024-07-14 02:21:11.444852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:05.919 [2024-07-14 02:21:11.444941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739640 ] 00:34:05.919 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:05.919 Zero copy mechanism will not be used. 
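For reference, the get_transient_errcount step traced a little earlier is what gated the previous run's success before the old bdevperf instance was killed above: it reads the transient-transport-error counter back from bdevperf over its RPC socket. A minimal stand-alone sketch of that query, using the same rpc.py invocation and jq filter shown in the trace (the socket path, bdev name and workspace checkout path are the ones visible in this log):

  # Read back the COMMAND TRANSIENT TRANSPORT ERROR counter bdevperf accumulated for nvme0n1.
  # Requires bdev_nvme_set_options --nvme-error-stat to have been applied before the attach.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')
  # A non-zero count means the injected digest corruption reached the host as
  # transient transport errors; this run reported 142 of them.
  (( errcount > 0 )) && echo "digest errors detected: $errcount"

digest.sh then kills that bdevperf instance and restarts it for the next workload (randwrite, 128 KiB I/O, queue depth 16), which is the startup visible just above.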
00:34:05.919 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.919 [2024-07-14 02:21:11.507765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.919 [2024-07-14 02:21:11.595731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.177 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:06.177 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:06.177 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:06.177 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:06.436 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:06.436 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.436 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:06.436 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.436 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:06.436 02:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:06.694 nvme0n1 00:34:06.694 02:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:06.694 02:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.694 02:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:06.694 02:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.694 02:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:06.694 02:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:06.954 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:06.954 Zero copy mechanism will not be used. 00:34:06.954 Running I/O for 2 seconds... 
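The trace above is the setup for the second error run: per-status-code NVMe error counters are enabled, the accel crc32c operation is switched to corrupt mode, and the controller is attached with --ddgst so data digests are generated and checked on the wire. Below is a condensed, hypothetical replay of that same sequence, not the literal digest.sh code; bperf_rpc in the trace targets the bdevperf instance on /var/tmp/bperf.sock, while plain rpc_cmd talks to the nvmf target over its default RPC socket (an assumption here, the target socket path is not shown in the log):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
  TGT_RPC="$SPDK_DIR/scripts/rpc.py"

  # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Leave crc32c untouched while the controller is attached ...
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  # ... attach with data digest enabled so DATA PDUs carry a CRC32C digest ...
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ... then start corrupting crc32c results (-i 32 as in the trace) and run the workload.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Injection is disabled while the controller is attached and only switched to corrupt afterwards, presumably so the connect sequence itself completes cleanly and only the workload's WRITE data PDUs trip the digest check, which is exactly what the error stream that follows shows.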
00:34:06.954 [2024-07-14 02:21:12.437557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.954 [2024-07-14 02:21:12.437986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.954 [2024-07-14 02:21:12.438039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.954 [2024-07-14 02:21:12.454232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.954 [2024-07-14 02:21:12.454645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.954 [2024-07-14 02:21:12.454692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.954 [2024-07-14 02:21:12.470800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.954 [2024-07-14 02:21:12.471349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.954 [2024-07-14 02:21:12.471380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.955 [2024-07-14 02:21:12.489952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.955 [2024-07-14 02:21:12.490342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.955 [2024-07-14 02:21:12.490372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.955 [2024-07-14 02:21:12.509106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.955 [2024-07-14 02:21:12.509617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.955 [2024-07-14 02:21:12.509663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.955 [2024-07-14 02:21:12.527135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.955 [2024-07-14 02:21:12.527543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.955 [2024-07-14 02:21:12.527573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.955 [2024-07-14 02:21:12.544398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.955 [2024-07-14 02:21:12.544902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.955 [2024-07-14 02:21:12.544956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.955 [2024-07-14 02:21:12.563205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.955 [2024-07-14 02:21:12.563724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.955 [2024-07-14 02:21:12.563769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.955 [2024-07-14 02:21:12.581169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.955 [2024-07-14 02:21:12.581706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.955 [2024-07-14 02:21:12.581736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.955 [2024-07-14 02:21:12.598379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.955 [2024-07-14 02:21:12.598688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.955 [2024-07-14 02:21:12.598719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.955 [2024-07-14 02:21:12.617443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.955 [2024-07-14 02:21:12.618015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.955 [2024-07-14 02:21:12.618060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.955 [2024-07-14 02:21:12.634834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:06.955 [2024-07-14 02:21:12.635311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.955 [2024-07-14 02:21:12.635341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.216 [2024-07-14 02:21:12.653913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.216 [2024-07-14 02:21:12.654360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-14 02:21:12.654390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.216 [2024-07-14 02:21:12.673411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.216 [2024-07-14 02:21:12.673941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-14 02:21:12.673986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.216 [2024-07-14 02:21:12.692640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.216 [2024-07-14 02:21:12.693211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-14 02:21:12.693256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.216 [2024-07-14 02:21:12.708905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.216 [2024-07-14 02:21:12.709345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-14 02:21:12.709374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.216 [2024-07-14 02:21:12.727473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.216 [2024-07-14 02:21:12.727941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-14 02:21:12.727971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.216 [2024-07-14 02:21:12.746711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.216 [2024-07-14 02:21:12.747167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-14 02:21:12.747213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.216 [2024-07-14 02:21:12.762799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.216 [2024-07-14 02:21:12.763208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.217 [2024-07-14 02:21:12.763254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.217 [2024-07-14 02:21:12.779752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.217 [2024-07-14 02:21:12.780296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.217 [2024-07-14 02:21:12.780342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.217 [2024-07-14 02:21:12.799689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.217 [2024-07-14 02:21:12.800165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.217 [2024-07-14 02:21:12.800216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.217 [2024-07-14 02:21:12.819292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.217 [2024-07-14 02:21:12.819752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.217 [2024-07-14 02:21:12.819796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.217 [2024-07-14 02:21:12.836600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.217 [2024-07-14 02:21:12.837115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.217 [2024-07-14 02:21:12.837160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.217 [2024-07-14 02:21:12.855270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.217 [2024-07-14 02:21:12.855686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.217 [2024-07-14 02:21:12.855731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.217 [2024-07-14 02:21:12.874762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.217 [2024-07-14 02:21:12.875338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.217 [2024-07-14 02:21:12.875382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.217 [2024-07-14 02:21:12.893392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.217 [2024-07-14 02:21:12.893953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.217 [2024-07-14 02:21:12.893999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.478 [2024-07-14 02:21:12.911606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.478 [2024-07-14 02:21:12.912023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.478 [2024-07-14 02:21:12.912053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.478 [2024-07-14 02:21:12.930548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.478 [2024-07-14 02:21:12.930967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.478 
[2024-07-14 02:21:12.931012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.478 [2024-07-14 02:21:12.949648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.478 [2024-07-14 02:21:12.950179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.478 [2024-07-14 02:21:12.950209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.478 [2024-07-14 02:21:12.967031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.478 [2024-07-14 02:21:12.967436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.478 [2024-07-14 02:21:12.967466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.478 [2024-07-14 02:21:12.985245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.478 [2024-07-14 02:21:12.985821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:12.985851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.003897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.479 [2024-07-14 02:21:13.004372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:13.004415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.021703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.479 [2024-07-14 02:21:13.022171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:13.022217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.040288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.479 [2024-07-14 02:21:13.040668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:13.040698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.059247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.479 [2024-07-14 02:21:13.059632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:13.059675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.078344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.479 [2024-07-14 02:21:13.078723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:13.078766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.097063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.479 [2024-07-14 02:21:13.097456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:13.097501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.114825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.479 [2024-07-14 02:21:13.115189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:13.115219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.133702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.479 [2024-07-14 02:21:13.134112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:13.134144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.150238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.479 [2024-07-14 02:21:13.150620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.479 [2024-07-14 02:21:13.150649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.479 [2024-07-14 02:21:13.169618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.170141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.170173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.188557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.189041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.189071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.207815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.208203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.208234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.227032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.227514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.227543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.246012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.246474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.246520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.265655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.266002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.266034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.284811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.285413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.285465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.304440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.304884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.304929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.323655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.324069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.324100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.340114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.340494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.340524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.357396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.357787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.357831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.375536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.375880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.375911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.392569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.392970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.393018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.409923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.738 [2024-07-14 02:21:13.410394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.738 [2024-07-14 02:21:13.410437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.738 [2024-07-14 02:21:13.428818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.429368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.429414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.448562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 
[2024-07-14 02:21:13.448932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.448970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.467343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.467719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.467764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.486413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.486839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.486891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.504755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.505294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.505325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.524491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.524933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.524977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.543966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.544360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.544405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.562700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.563146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.563190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.582077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.582657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.582701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.600640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.601177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.601227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.619129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.619686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.619715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.637572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.638046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.638089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.656488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.656944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.656973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.996 [2024-07-14 02:21:13.676486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:07.996 [2024-07-14 02:21:13.676953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.996 [2024-07-14 02:21:13.676995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.696552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.696934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.696976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.715938] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.716460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.716504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.735970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.736547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.736574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.755673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.756312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.756345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.776022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.776492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.776519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.795632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.796196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.796239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.813951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.814413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.814439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.835064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.835539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.835565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:08.256 [2024-07-14 02:21:13.855940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.856436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.856463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.875499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.875942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.875970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.895221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.895630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.895658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.914320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.914758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.914786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.256 [2024-07-14 02:21:13.933609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.256 [2024-07-14 02:21:13.934117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.256 [2024-07-14 02:21:13.934161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:13.952796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:13.953357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.516 [2024-07-14 02:21:13.953399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:13.971257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:13.971607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.516 [2024-07-14 02:21:13.971634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:13.990199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:13.990593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.516 [2024-07-14 02:21:13.990621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:14.009229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:14.009609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.516 [2024-07-14 02:21:14.009651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:14.028115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:14.028551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.516 [2024-07-14 02:21:14.028578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:14.048701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:14.048998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.516 [2024-07-14 02:21:14.049039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:14.068630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:14.069117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.516 [2024-07-14 02:21:14.069161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:14.085466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:14.085877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.516 [2024-07-14 02:21:14.085918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:14.104983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:14.105360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.516 [2024-07-14 02:21:14.105406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.516 [2024-07-14 02:21:14.124821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.516 [2024-07-14 02:21:14.125337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.517 [2024-07-14 02:21:14.125379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.517 [2024-07-14 02:21:14.141703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.517 [2024-07-14 02:21:14.142190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.517 [2024-07-14 02:21:14.142217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.517 [2024-07-14 02:21:14.160303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.517 [2024-07-14 02:21:14.160797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.517 [2024-07-14 02:21:14.160824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.517 [2024-07-14 02:21:14.178680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.517 [2024-07-14 02:21:14.179275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.517 [2024-07-14 02:21:14.179318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.517 [2024-07-14 02:21:14.199428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.517 [2024-07-14 02:21:14.199870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.517 [2024-07-14 02:21:14.199914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.218663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.219125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.219168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.235694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.236338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.236381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.254893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.255399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.255427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.274875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.275300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.275342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.293723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.294188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.294215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.312976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.313396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.313438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.333704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.334067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.334095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.352874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.353437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.353463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.373624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.374053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 
[2024-07-14 02:21:14.374080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.393860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.394228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.394255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.775 [2024-07-14 02:21:14.414175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24faf80) with pdu=0x2000190fef90 00:34:08.775 [2024-07-14 02:21:14.414729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.775 [2024-07-14 02:21:14.414771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.775 00:34:08.775 Latency(us) 00:34:08.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:08.775 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:08.775 nvme0n1 : 2.01 1649.99 206.25 0.00 0.00 9669.21 3495.25 22330.79 00:34:08.775 =================================================================================================================== 00:34:08.775 Total : 1649.99 206.25 0.00 0.00 9669.21 3495.25 22330.79 00:34:08.775 0 00:34:08.775 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:08.775 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:08.775 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:08.775 | .driver_specific 00:34:08.775 | .nvme_error 00:34:08.775 | .status_code 00:34:08.775 | .command_transient_transport_error' 00:34:08.775 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 106 > 0 )) 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1739640 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1739640 ']' 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1739640 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1739640 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 1739640' 00:34:09.342 killing process with pid 1739640 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1739640 00:34:09.342 Received shutdown signal, test time was about 2.000000 seconds 00:34:09.342 00:34:09.342 Latency(us) 00:34:09.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.342 =================================================================================================================== 00:34:09.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1739640 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1737677 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1737677 ']' 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1737677 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737677 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737677' 00:34:09.342 killing process with pid 1737677 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1737677 00:34:09.342 02:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1737677 00:34:09.601 00:34:09.601 real 0m15.115s 00:34:09.601 user 0m30.366s 00:34:09.601 sys 0m3.980s 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:09.601 ************************************ 00:34:09.601 END TEST nvmf_digest_error 00:34:09.601 ************************************ 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:09.601 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:09.601 rmmod nvme_tcp 00:34:09.601 rmmod nvme_fabrics 00:34:09.601 rmmod nvme_keyring 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:09.897 02:21:15 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1737677 ']' 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1737677 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1737677 ']' 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1737677 00:34:09.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1737677) - No such process 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1737677 is not found' 00:34:09.897 Process with pid 1737677 is not found 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:09.897 02:21:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.830 02:21:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:11.830 00:34:11.830 real 0m34.452s 00:34:11.830 user 1m1.483s 00:34:11.830 sys 0m9.229s 00:34:11.830 02:21:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:11.830 02:21:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.830 ************************************ 00:34:11.830 END TEST nvmf_digest 00:34:11.830 ************************************ 00:34:11.830 02:21:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:11.830 02:21:17 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:11.830 02:21:17 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:11.830 02:21:17 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:11.830 02:21:17 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:11.830 02:21:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:11.830 02:21:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:11.830 02:21:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:11.830 ************************************ 00:34:11.830 START TEST nvmf_bdevperf 00:34:11.830 ************************************ 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:11.830 * Looking for test storage... 
00:34:11.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:11.830 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:11.831 02:21:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:13.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:13.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:13.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:13.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:13.736 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:13.737 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:13.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:13.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:34:13.997 00:34:13.997 --- 10.0.0.2 ping statistics --- 00:34:13.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.997 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:13.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:13.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:34:13.997 00:34:13.997 --- 10.0.0.1 ping statistics --- 00:34:13.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.997 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1741990 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1741990 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1741990 ']' 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:13.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:13.997 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.997 [2024-07-14 02:21:19.510475] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:13.997 [2024-07-14 02:21:19.510547] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:13.997 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.997 [2024-07-14 02:21:19.576197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:13.997 [2024-07-14 02:21:19.667805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:13.997 [2024-07-14 02:21:19.667863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:13.997 [2024-07-14 02:21:19.667898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:13.997 [2024-07-14 02:21:19.667911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:13.997 [2024-07-14 02:21:19.667921] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:13.997 [2024-07-14 02:21:19.667977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:13.997 [2024-07-14 02:21:19.668038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:13.997 [2024-07-14 02:21:19.668040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.256 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.257 [2024-07-14 02:21:19.810205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.257 Malloc0 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.257 [2024-07-14 02:21:19.876452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:14.257 { 00:34:14.257 "params": { 00:34:14.257 "name": "Nvme$subsystem", 00:34:14.257 "trtype": "$TEST_TRANSPORT", 00:34:14.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.257 "adrfam": "ipv4", 00:34:14.257 "trsvcid": "$NVMF_PORT", 00:34:14.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.257 "hdgst": ${hdgst:-false}, 00:34:14.257 "ddgst": ${ddgst:-false} 00:34:14.257 }, 00:34:14.257 "method": "bdev_nvme_attach_controller" 00:34:14.257 } 00:34:14.257 EOF 00:34:14.257 )") 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:14.257 02:21:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:14.257 "params": { 00:34:14.257 "name": "Nvme1", 00:34:14.257 "trtype": "tcp", 00:34:14.257 "traddr": "10.0.0.2", 00:34:14.257 "adrfam": "ipv4", 00:34:14.257 "trsvcid": "4420", 00:34:14.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:14.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:14.257 "hdgst": false, 00:34:14.257 "ddgst": false 00:34:14.257 }, 00:34:14.257 "method": "bdev_nvme_attach_controller" 00:34:14.257 }' 00:34:14.257 [2024-07-14 02:21:19.926162] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:14.257 [2024-07-14 02:21:19.926241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742019 ] 00:34:14.515 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.515 [2024-07-14 02:21:19.988359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.515 [2024-07-14 02:21:20.082726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.774 Running I/O for 1 seconds... 
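For reference, the bdev_nvme_attach_controller fragment printed by gen_nvmf_target_json above can be saved as a standalone JSON config and handed to bdevperf directly. A minimal sketch, assuming the standard SPDK "subsystems"/"bdev" wrapper around that fragment (the log only shows the inner entry) and a made-up temp-file path:

cat > /tmp/bdevperf_nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# same flags as the trace above: queue depth 128, 4 KiB verify I/O for 1 second
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme1.json -q 128 -o 4096 -w verify -t 1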
00:34:15.711 00:34:15.711 Latency(us) 00:34:15.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.711 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:15.711 Verification LBA range: start 0x0 length 0x4000 00:34:15.711 Nvme1n1 : 1.01 8860.58 34.61 0.00 0.00 14385.08 2694.26 13689.74 00:34:15.711 =================================================================================================================== 00:34:15.711 Total : 8860.58 34.61 0.00 0.00 14385.08 2694.26 13689.74 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1742271 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.969 { 00:34:15.969 "params": { 00:34:15.969 "name": "Nvme$subsystem", 00:34:15.969 "trtype": "$TEST_TRANSPORT", 00:34:15.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.969 "adrfam": "ipv4", 00:34:15.969 "trsvcid": "$NVMF_PORT", 00:34:15.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.969 "hdgst": ${hdgst:-false}, 00:34:15.969 "ddgst": ${ddgst:-false} 00:34:15.969 }, 00:34:15.969 "method": "bdev_nvme_attach_controller" 00:34:15.969 } 00:34:15.969 EOF 00:34:15.969 )") 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:15.969 02:21:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:15.969 "params": { 00:34:15.969 "name": "Nvme1", 00:34:15.969 "trtype": "tcp", 00:34:15.969 "traddr": "10.0.0.2", 00:34:15.969 "adrfam": "ipv4", 00:34:15.969 "trsvcid": "4420", 00:34:15.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.969 "hdgst": false, 00:34:15.969 "ddgst": false 00:34:15.969 }, 00:34:15.969 "method": "bdev_nvme_attach_controller" 00:34:15.969 }' 00:34:15.969 [2024-07-14 02:21:21.560095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:15.969 [2024-07-14 02:21:21.560204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742271 ] 00:34:15.969 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.969 [2024-07-14 02:21:21.620353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.227 [2024-07-14 02:21:21.704844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.484 Running I/O for 15 seconds... 
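What follows is the failover half of the test: the nvmf target (nvmfpid=1741990 above) is killed while this 15-second verify job is still in flight. A hedged reconstruction of that sequence from the pids and flags visible in the trace (the actual host/bdevperf.sh source is not shown in this log, and the JSON path is the illustrative one from the sketch above):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme1.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3
kill -9 "$nvmfpid"   # crash the target while verify I/O is outstanding
sleep 3              # the aborted commands and controller resets below are the fallout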
00:34:19.016 02:21:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1741990 00:34:19.016 02:21:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:19.016 [2024-07-14 02:21:24.532737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.016 [2024-07-14 02:21:24.532791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.532827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.532846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.532875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.532911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.532929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.532944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.532962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.532977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.532994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.533011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.533035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.533051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.533068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.533086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.533106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.533122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.533156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.533175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.533194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.533211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.533229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.533246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.533263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.533278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.016 [2024-07-14 02:21:24.533295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.016 [2024-07-14 02:21:24.533309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.533710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.533743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.533774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.533806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.533837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.533877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.533931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.533959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.533974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.533987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.534247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.534279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.534314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.534347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.534380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.534411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.534444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.017 [2024-07-14 02:21:24.534475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 
[2024-07-14 02:21:24.534527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.017 [2024-07-14 02:21:24.534703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.017 [2024-07-14 02:21:24.534720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.534739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.534756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.534771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.534788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.534804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.534821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.534835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.534852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.534875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.534893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.534923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.534939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.534952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.534967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.534981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.534995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:48224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48304 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:19.018 [2024-07-14 02:21:24.535885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.535984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.535997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.536016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.536029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.536044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.536057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.536071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.018 [2024-07-14 02:21:24.536083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.536098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.018 [2024-07-14 02:21:24.536111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.018 [2024-07-14 02:21:24.536126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.018 [2024-07-14 02:21:24.536154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536224] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:19.019 [2024-07-14 02:21:24.536863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.536976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.536989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.537004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.537017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.537031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.537044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.537059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.537071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.537086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:19.019 [2024-07-14 02:21:24.537098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.537112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4050 is same with the state(5) to be set 00:34:19.019 [2024-07-14 02:21:24.537128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:19.019 [2024-07-14 02:21:24.537140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:19.019 [2024-07-14 02:21:24.537180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48560 len:8 PRP1 0x0 PRP2 0x0 00:34:19.019 [2024-07-14 02:21:24.537194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.019 [2024-07-14 02:21:24.537263] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bf4050 was disconnected and freed. reset controller. 
00:34:19.019 [2024-07-14 02:21:24.541081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.019 [2024-07-14 02:21:24.541172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.019 [2024-07-14 02:21:24.542005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-14 02:21:24.542037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.019 [2024-07-14 02:21:24.542057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.019 [2024-07-14 02:21:24.542297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.019 [2024-07-14 02:21:24.542548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.019 [2024-07-14 02:21:24.542572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.019 [2024-07-14 02:21:24.542591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.019 [2024-07-14 02:21:24.546169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.019 [2024-07-14 02:21:24.555242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.019 [2024-07-14 02:21:24.555687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-14 02:21:24.555720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.019 [2024-07-14 02:21:24.555739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.019 [2024-07-14 02:21:24.555990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.019 [2024-07-14 02:21:24.556234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.019 [2024-07-14 02:21:24.556260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.019 [2024-07-14 02:21:24.556276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.019 [2024-07-14 02:21:24.559842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.019 [2024-07-14 02:21:24.569105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.019 [2024-07-14 02:21:24.569580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.019 [2024-07-14 02:21:24.569613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.019 [2024-07-14 02:21:24.569631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.019 [2024-07-14 02:21:24.569882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.019 [2024-07-14 02:21:24.570125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.019 [2024-07-14 02:21:24.570150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.019 [2024-07-14 02:21:24.570166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.019 [2024-07-14 02:21:24.573735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.019 [2024-07-14 02:21:24.582999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.020 [2024-07-14 02:21:24.583479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-14 02:21:24.583511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.020 [2024-07-14 02:21:24.583529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.020 [2024-07-14 02:21:24.583767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.020 [2024-07-14 02:21:24.584023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.020 [2024-07-14 02:21:24.584050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.020 [2024-07-14 02:21:24.584066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.020 [2024-07-14 02:21:24.587638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.020 [2024-07-14 02:21:24.596904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.020 [2024-07-14 02:21:24.597340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-14 02:21:24.597371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.020 [2024-07-14 02:21:24.597389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.020 [2024-07-14 02:21:24.597627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.020 [2024-07-14 02:21:24.597882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.020 [2024-07-14 02:21:24.597908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.020 [2024-07-14 02:21:24.597924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.020 [2024-07-14 02:21:24.601486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.020 [2024-07-14 02:21:24.610752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.020 [2024-07-14 02:21:24.611226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-14 02:21:24.611253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.020 [2024-07-14 02:21:24.611269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.020 [2024-07-14 02:21:24.611516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.020 [2024-07-14 02:21:24.611759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.020 [2024-07-14 02:21:24.611784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.020 [2024-07-14 02:21:24.611800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.020 [2024-07-14 02:21:24.615377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.020 [2024-07-14 02:21:24.624635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.020 [2024-07-14 02:21:24.625102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-14 02:21:24.625135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.020 [2024-07-14 02:21:24.625153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.020 [2024-07-14 02:21:24.625392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.020 [2024-07-14 02:21:24.625634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.020 [2024-07-14 02:21:24.625659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.020 [2024-07-14 02:21:24.625675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.020 [2024-07-14 02:21:24.629251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.020 [2024-07-14 02:21:24.638533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.020 [2024-07-14 02:21:24.638996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-14 02:21:24.639034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.020 [2024-07-14 02:21:24.639053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.020 [2024-07-14 02:21:24.639292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.020 [2024-07-14 02:21:24.639534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.020 [2024-07-14 02:21:24.639559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.020 [2024-07-14 02:21:24.639575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.020 [2024-07-14 02:21:24.643151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.020 [2024-07-14 02:21:24.652405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.020 [2024-07-14 02:21:24.652947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-14 02:21:24.652980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.020 [2024-07-14 02:21:24.652998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.020 [2024-07-14 02:21:24.653237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.020 [2024-07-14 02:21:24.653478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.020 [2024-07-14 02:21:24.653504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.020 [2024-07-14 02:21:24.653521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.020 [2024-07-14 02:21:24.657099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.020 [2024-07-14 02:21:24.666359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.020 [2024-07-14 02:21:24.666832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-14 02:21:24.666859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.020 [2024-07-14 02:21:24.666900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.020 [2024-07-14 02:21:24.667158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.020 [2024-07-14 02:21:24.667401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.020 [2024-07-14 02:21:24.667425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.020 [2024-07-14 02:21:24.667441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.020 [2024-07-14 02:21:24.671018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.020 [2024-07-14 02:21:24.680293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.020 [2024-07-14 02:21:24.680756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-14 02:21:24.680789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.020 [2024-07-14 02:21:24.680807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.020 [2024-07-14 02:21:24.681059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.020 [2024-07-14 02:21:24.681308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.020 [2024-07-14 02:21:24.681337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.020 [2024-07-14 02:21:24.681353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.020 [2024-07-14 02:21:24.684925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.020 [2024-07-14 02:21:24.694183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.020 [2024-07-14 02:21:24.694626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.020 [2024-07-14 02:21:24.694656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.020 [2024-07-14 02:21:24.694672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.020 [2024-07-14 02:21:24.694931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.020 [2024-07-14 02:21:24.695175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.020 [2024-07-14 02:21:24.695200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.020 [2024-07-14 02:21:24.695216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.020 [2024-07-14 02:21:24.698790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.281 [2024-07-14 02:21:24.708244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.708712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.708746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.708765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.709014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.281 [2024-07-14 02:21:24.709258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.281 [2024-07-14 02:21:24.709283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.281 [2024-07-14 02:21:24.709299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.281 [2024-07-14 02:21:24.712879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.281 [2024-07-14 02:21:24.722107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.722752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.722785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.722804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.723053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.281 [2024-07-14 02:21:24.723298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.281 [2024-07-14 02:21:24.723323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.281 [2024-07-14 02:21:24.723339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.281 [2024-07-14 02:21:24.726919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.281 [2024-07-14 02:21:24.736005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.736469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.736501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.736519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.736758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.281 [2024-07-14 02:21:24.737012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.281 [2024-07-14 02:21:24.737037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.281 [2024-07-14 02:21:24.737053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.281 [2024-07-14 02:21:24.740619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.281 [2024-07-14 02:21:24.749899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.750369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.750397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.750413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.750662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.281 [2024-07-14 02:21:24.750917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.281 [2024-07-14 02:21:24.750942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.281 [2024-07-14 02:21:24.750958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.281 [2024-07-14 02:21:24.754519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.281 [2024-07-14 02:21:24.763776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.764238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.764269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.764287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.764526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.281 [2024-07-14 02:21:24.764769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.281 [2024-07-14 02:21:24.764794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.281 [2024-07-14 02:21:24.764809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.281 [2024-07-14 02:21:24.768395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.281 [2024-07-14 02:21:24.777661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.778134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.778167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.778190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.778430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.281 [2024-07-14 02:21:24.778673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.281 [2024-07-14 02:21:24.778698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.281 [2024-07-14 02:21:24.778714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.281 [2024-07-14 02:21:24.782292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.281 [2024-07-14 02:21:24.791552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.792017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.792045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.792061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.792315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.281 [2024-07-14 02:21:24.792559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.281 [2024-07-14 02:21:24.792583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.281 [2024-07-14 02:21:24.792599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.281 [2024-07-14 02:21:24.796175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.281 [2024-07-14 02:21:24.805448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.805891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.805939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.805957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.806186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.281 [2024-07-14 02:21:24.806445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.281 [2024-07-14 02:21:24.806472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.281 [2024-07-14 02:21:24.806488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.281 [2024-07-14 02:21:24.810043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.281 [2024-07-14 02:21:24.819479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.819969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.819998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.820014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.820265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.281 [2024-07-14 02:21:24.820510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.281 [2024-07-14 02:21:24.820541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.281 [2024-07-14 02:21:24.820558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.281 [2024-07-14 02:21:24.824184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.281 [2024-07-14 02:21:24.833415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.281 [2024-07-14 02:21:24.833861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-14 02:21:24.833928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.281 [2024-07-14 02:21:24.833946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.281 [2024-07-14 02:21:24.834162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.834408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.834430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.834443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.838018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.282 [2024-07-14 02:21:24.847413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.282 [2024-07-14 02:21:24.847884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.282 [2024-07-14 02:21:24.847931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.282 [2024-07-14 02:21:24.847948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.282 [2024-07-14 02:21:24.848177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.848421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.848447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.848463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.852077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.282 [2024-07-14 02:21:24.861392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.282 [2024-07-14 02:21:24.861885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.282 [2024-07-14 02:21:24.861936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.282 [2024-07-14 02:21:24.861953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.282 [2024-07-14 02:21:24.862192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.862436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.862461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.862478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.866059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.282 [2024-07-14 02:21:24.875333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.282 [2024-07-14 02:21:24.875845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.282 [2024-07-14 02:21:24.875905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.282 [2024-07-14 02:21:24.875925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.282 [2024-07-14 02:21:24.876166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.876420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.876446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.876462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.880034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.282 [2024-07-14 02:21:24.889297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.282 [2024-07-14 02:21:24.889796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.282 [2024-07-14 02:21:24.889828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.282 [2024-07-14 02:21:24.889846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.282 [2024-07-14 02:21:24.890093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.890337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.890362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.890379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.893951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.282 [2024-07-14 02:21:24.903212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.282 [2024-07-14 02:21:24.903676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.282 [2024-07-14 02:21:24.903708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.282 [2024-07-14 02:21:24.903726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.282 [2024-07-14 02:21:24.903979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.904229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.904255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.904271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.907833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.282 [2024-07-14 02:21:24.917092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.282 [2024-07-14 02:21:24.917561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.282 [2024-07-14 02:21:24.917593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.282 [2024-07-14 02:21:24.917610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.282 [2024-07-14 02:21:24.917855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.918111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.918137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.918154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.921719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.282 [2024-07-14 02:21:24.931036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.282 [2024-07-14 02:21:24.931504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.282 [2024-07-14 02:21:24.931536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.282 [2024-07-14 02:21:24.931554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.282 [2024-07-14 02:21:24.931793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.932049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.932075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.932091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.935656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.282 [2024-07-14 02:21:24.944921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.282 [2024-07-14 02:21:24.945382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.282 [2024-07-14 02:21:24.945415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.282 [2024-07-14 02:21:24.945433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.282 [2024-07-14 02:21:24.945671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.945926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.945952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.945968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.949532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.282 [2024-07-14 02:21:24.958797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.282 [2024-07-14 02:21:24.959268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.282 [2024-07-14 02:21:24.959300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.282 [2024-07-14 02:21:24.959319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.282 [2024-07-14 02:21:24.959557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.282 [2024-07-14 02:21:24.959800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.282 [2024-07-14 02:21:24.959826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.282 [2024-07-14 02:21:24.959848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.282 [2024-07-14 02:21:24.963426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.541 [2024-07-14 02:21:24.972938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.541 [2024-07-14 02:21:24.973379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.541 [2024-07-14 02:21:24.973407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.541 [2024-07-14 02:21:24.973423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.541 [2024-07-14 02:21:24.973662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.541 [2024-07-14 02:21:24.973920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.541 [2024-07-14 02:21:24.973946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.541 [2024-07-14 02:21:24.973963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.541 [2024-07-14 02:21:24.977619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.541 [2024-07-14 02:21:24.986908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.541 [2024-07-14 02:21:24.987353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.541 [2024-07-14 02:21:24.987387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.541 [2024-07-14 02:21:24.987405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.541 [2024-07-14 02:21:24.987644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.541 [2024-07-14 02:21:24.987901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.541 [2024-07-14 02:21:24.987927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.541 [2024-07-14 02:21:24.987944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.541 [2024-07-14 02:21:24.991512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.541 [2024-07-14 02:21:25.000772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.541 [2024-07-14 02:21:25.001309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.541 [2024-07-14 02:21:25.001361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.541 [2024-07-14 02:21:25.001379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.541 [2024-07-14 02:21:25.001617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.541 [2024-07-14 02:21:25.001859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.541 [2024-07-14 02:21:25.001896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.541 [2024-07-14 02:21:25.001913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.541 [2024-07-14 02:21:25.005479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.541 [2024-07-14 02:21:25.014743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.541 [2024-07-14 02:21:25.015193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.541 [2024-07-14 02:21:25.015226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.541 [2024-07-14 02:21:25.015244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.541 [2024-07-14 02:21:25.015483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.541 [2024-07-14 02:21:25.015728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.541 [2024-07-14 02:21:25.015753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.541 [2024-07-14 02:21:25.015769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.541 [2024-07-14 02:21:25.019345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.541 [2024-07-14 02:21:25.028596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.541 [2024-07-14 02:21:25.029120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.541 [2024-07-14 02:21:25.029148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.541 [2024-07-14 02:21:25.029164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.541 [2024-07-14 02:21:25.029419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.541 [2024-07-14 02:21:25.029662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.541 [2024-07-14 02:21:25.029688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.541 [2024-07-14 02:21:25.029704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.541 [2024-07-14 02:21:25.033279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.541 [2024-07-14 02:21:25.042539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.541 [2024-07-14 02:21:25.042996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.541 [2024-07-14 02:21:25.043028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.541 [2024-07-14 02:21:25.043047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.541 [2024-07-14 02:21:25.043285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.541 [2024-07-14 02:21:25.043527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.541 [2024-07-14 02:21:25.043552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.541 [2024-07-14 02:21:25.043569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.541 [2024-07-14 02:21:25.047145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.541 [2024-07-14 02:21:25.056432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.541 [2024-07-14 02:21:25.056940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.541 [2024-07-14 02:21:25.056973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.541 [2024-07-14 02:21:25.056992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.541 [2024-07-14 02:21:25.057232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.541 [2024-07-14 02:21:25.057482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.541 [2024-07-14 02:21:25.057508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.541 [2024-07-14 02:21:25.057524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.541 [2024-07-14 02:21:25.061101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.541 [2024-07-14 02:21:25.070367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.541 [2024-07-14 02:21:25.070833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.541 [2024-07-14 02:21:25.070861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.541 [2024-07-14 02:21:25.070891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.541 [2024-07-14 02:21:25.071146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.541 [2024-07-14 02:21:25.071389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.541 [2024-07-14 02:21:25.071414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.541 [2024-07-14 02:21:25.071429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.541 [2024-07-14 02:21:25.075006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.541 [2024-07-14 02:21:25.084266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.541 [2024-07-14 02:21:25.084745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.541 [2024-07-14 02:21:25.084779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.541 [2024-07-14 02:21:25.084797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.541 [2024-07-14 02:21:25.085049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.541 [2024-07-14 02:21:25.085293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.541 [2024-07-14 02:21:25.085319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.541 [2024-07-14 02:21:25.085335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.088906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.542 [2024-07-14 02:21:25.098169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.098633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.098665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.098683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.098936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.099179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.099204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.099221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.102793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.542 [2024-07-14 02:21:25.112058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.112528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.112560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.112578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.112816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.113071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.113097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.113114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.116682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.542 [2024-07-14 02:21:25.125962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.126430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.126458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.126475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.126723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.126979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.127005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.127020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.130581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.542 [2024-07-14 02:21:25.139881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.140351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.140384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.140402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.140641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.140893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.140918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.140934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.144496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.542 [2024-07-14 02:21:25.153747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.154214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.154246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.154273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.154512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.154756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.154780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.154796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.158364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.542 [2024-07-14 02:21:25.167618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.168098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.168141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.168159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.168397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.168640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.168665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.168681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.172266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.542 [2024-07-14 02:21:25.181536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.182016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.182048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.182066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.182305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.182547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.182573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.182588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.186164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.542 [2024-07-14 02:21:25.195430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.195975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.196007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.196025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.196264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.196508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.196537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.196554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.200130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.542 [2024-07-14 02:21:25.209394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.209968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.210000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.210018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.210257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.210501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.210525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.210541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.214112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.542 [2024-07-14 02:21:25.223374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.542 [2024-07-14 02:21:25.223902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-14 02:21:25.223935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.542 [2024-07-14 02:21:25.223953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.542 [2024-07-14 02:21:25.224192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.542 [2024-07-14 02:21:25.224435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.542 [2024-07-14 02:21:25.224460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.542 [2024-07-14 02:21:25.224476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.542 [2024-07-14 02:21:25.228052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.801 [2024-07-14 02:21:25.237479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.801 [2024-07-14 02:21:25.237967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-14 02:21:25.237997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.801 [2024-07-14 02:21:25.238013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.801 [2024-07-14 02:21:25.238270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.801 [2024-07-14 02:21:25.238515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.801 [2024-07-14 02:21:25.238539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.801 [2024-07-14 02:21:25.238555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.801 [2024-07-14 02:21:25.242140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.801 [2024-07-14 02:21:25.251446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.801 [2024-07-14 02:21:25.251904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-14 02:21:25.251943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.801 [2024-07-14 02:21:25.251962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.801 [2024-07-14 02:21:25.252201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.801 [2024-07-14 02:21:25.252444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.801 [2024-07-14 02:21:25.252470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.801 [2024-07-14 02:21:25.252486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.801 [2024-07-14 02:21:25.256065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.801 [2024-07-14 02:21:25.265329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.801 [2024-07-14 02:21:25.265854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-14 02:21:25.265894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.801 [2024-07-14 02:21:25.265912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.801 [2024-07-14 02:21:25.266151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.801 [2024-07-14 02:21:25.266394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.801 [2024-07-14 02:21:25.266418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.801 [2024-07-14 02:21:25.266435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.801 [2024-07-14 02:21:25.270009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.801 [2024-07-14 02:21:25.279331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.801 [2024-07-14 02:21:25.279787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-14 02:21:25.279815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.801 [2024-07-14 02:21:25.279831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.801 [2024-07-14 02:21:25.280108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.801 [2024-07-14 02:21:25.280353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.801 [2024-07-14 02:21:25.280378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.801 [2024-07-14 02:21:25.280394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.801 [2024-07-14 02:21:25.283967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.801 [2024-07-14 02:21:25.293237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.801 [2024-07-14 02:21:25.293802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-14 02:21:25.293856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.801 [2024-07-14 02:21:25.293890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.801 [2024-07-14 02:21:25.294135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.801 [2024-07-14 02:21:25.294392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.801 [2024-07-14 02:21:25.294417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.801 [2024-07-14 02:21:25.294435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.801 [2024-07-14 02:21:25.298014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.801 [2024-07-14 02:21:25.307287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.801 [2024-07-14 02:21:25.307740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-14 02:21:25.307773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.801 [2024-07-14 02:21:25.307791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.801 [2024-07-14 02:21:25.308042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.801 [2024-07-14 02:21:25.308284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.801 [2024-07-14 02:21:25.308310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.801 [2024-07-14 02:21:25.308326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.801 [2024-07-14 02:21:25.311895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.801 [2024-07-14 02:21:25.321158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.801 [2024-07-14 02:21:25.321677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-14 02:21:25.321726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.801 [2024-07-14 02:21:25.321744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.801 [2024-07-14 02:21:25.321991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.801 [2024-07-14 02:21:25.322234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.801 [2024-07-14 02:21:25.322259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.801 [2024-07-14 02:21:25.322275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.801 [2024-07-14 02:21:25.325840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.801 [2024-07-14 02:21:25.335107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.801 [2024-07-14 02:21:25.335569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-14 02:21:25.335597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.801 [2024-07-14 02:21:25.335613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.801 [2024-07-14 02:21:25.335855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.801 [2024-07-14 02:21:25.336118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.801 [2024-07-14 02:21:25.336150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.801 [2024-07-14 02:21:25.336167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.801 [2024-07-14 02:21:25.339730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.801 [2024-07-14 02:21:25.349036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.801 [2024-07-14 02:21:25.349483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.349515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.349533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.349772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.350028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.350054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.350071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.353634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.802 [2024-07-14 02:21:25.362902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.363331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.363364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.363383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.363623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.363879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.363904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.363920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.367484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.802 [2024-07-14 02:21:25.376742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.377184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.377212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.377227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.377459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.377702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.377727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.377743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.381321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.802 [2024-07-14 02:21:25.390577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.391051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.391084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.391102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.391340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.391582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.391608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.391624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.395201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.802 [2024-07-14 02:21:25.404454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.404917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.404949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.404967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.405205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.405447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.405472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.405489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.409068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.802 [2024-07-14 02:21:25.418322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.418776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.418807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.418825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.419074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.419316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.419342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.419358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.422930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.802 [2024-07-14 02:21:25.432188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.432612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.432645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.432663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.432919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.433161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.433187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.433203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.436767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.802 [2024-07-14 02:21:25.446033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.446490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.446522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.446540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.446778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.447031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.447058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.447074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.450634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.802 [2024-07-14 02:21:25.459893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.460360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.460391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.460409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.460647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.460901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.460927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.460943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.464508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.802 [2024-07-14 02:21:25.473783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.474257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.474286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.474303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.474559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.474802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.474827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.474849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.802 [2024-07-14 02:21:25.478400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.802 [2024-07-14 02:21:25.487709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.802 [2024-07-14 02:21:25.488136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-14 02:21:25.488164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:19.802 [2024-07-14 02:21:25.488181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:19.802 [2024-07-14 02:21:25.488418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:19.802 [2024-07-14 02:21:25.488680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.802 [2024-07-14 02:21:25.488708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.802 [2024-07-14 02:21:25.488724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-14 02:21:25.492446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.061 [2024-07-14 02:21:25.501688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-14 02:21:25.502146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-14 02:21:25.502179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-14 02:21:25.502197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.061 [2024-07-14 02:21:25.502436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.061 [2024-07-14 02:21:25.502679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-14 02:21:25.502704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-14 02:21:25.502720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-14 02:21:25.506298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.061 [2024-07-14 02:21:25.515569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-14 02:21:25.516038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-14 02:21:25.516070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-14 02:21:25.516088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.516327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.516571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.516595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.516611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.520183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.062 [2024-07-14 02:21:25.529444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.529882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.529929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.529949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.530187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.530430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.530454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.530470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.534042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.062 [2024-07-14 02:21:25.543308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.543753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.543786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.543805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.544055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.544299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.544323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.544339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.547956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.062 [2024-07-14 02:21:25.557238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.557703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.557753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.557771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.558023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.558266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.558292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.558308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.562101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.062 [2024-07-14 02:21:25.571060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.571480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.571512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.571529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.571760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.572032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.572058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.572073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.575394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.062 [2024-07-14 02:21:25.584466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.584934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.584963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.584980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.585222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.585428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.585451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.585465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.588441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.062 [2024-07-14 02:21:25.597820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.598230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.598258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.598273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.598499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.598693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.598713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.598726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.601678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.062 [2024-07-14 02:21:25.611074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.611523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.611551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.611568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.611814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.612038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.612060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.612073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.615026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.062 [2024-07-14 02:21:25.624447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.624876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.624905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.624922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.625173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.625382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.625403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.625416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.628369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.062 [2024-07-14 02:21:25.637728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.638245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.638274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.638289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.638535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.638729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.638750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.638764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.641737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.062 [2024-07-14 02:21:25.650996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.651491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.651519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.651535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.651778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.652001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.652022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.652036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.654982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.062 [2024-07-14 02:21:25.664259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.664637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.664665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.664686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.664932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.665132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.665152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.665165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.668116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.062 [2024-07-14 02:21:25.677541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.677917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.677945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.677961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.678195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.678403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.678424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.678437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.681388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.062 [2024-07-14 02:21:25.690745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.691198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.691228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.691244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.691490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.691684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.691705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.691718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.694709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.062 [2024-07-14 02:21:25.704094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.704587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.704615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.704631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.704896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.705115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.705143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.705159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.708108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.062 [2024-07-14 02:21:25.717329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.062 [2024-07-14 02:21:25.717688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.062 [2024-07-14 02:21:25.717716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.062 [2024-07-14 02:21:25.717732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.062 [2024-07-14 02:21:25.717980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.062 [2024-07-14 02:21:25.718209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.062 [2024-07-14 02:21:25.718230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.062 [2024-07-14 02:21:25.718243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.062 [2024-07-14 02:21:25.721189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.063 [2024-07-14 02:21:25.730622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.063 [2024-07-14 02:21:25.731093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.063 [2024-07-14 02:21:25.731123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.063 [2024-07-14 02:21:25.731139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.063 [2024-07-14 02:21:25.731389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.063 [2024-07-14 02:21:25.731582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.063 [2024-07-14 02:21:25.731603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.063 [2024-07-14 02:21:25.731616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.063 [2024-07-14 02:21:25.734564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.063 [2024-07-14 02:21:25.743948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.063 [2024-07-14 02:21:25.744418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.063 [2024-07-14 02:21:25.744446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.063 [2024-07-14 02:21:25.744463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.063 [2024-07-14 02:21:25.744697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.063 [2024-07-14 02:21:25.744937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.063 [2024-07-14 02:21:25.744960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.063 [2024-07-14 02:21:25.744974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.063 [2024-07-14 02:21:25.748038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.323 [2024-07-14 02:21:25.757330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.323 [2024-07-14 02:21:25.757778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.323 [2024-07-14 02:21:25.757816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.323 [2024-07-14 02:21:25.757840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.323 [2024-07-14 02:21:25.758081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.323 [2024-07-14 02:21:25.758294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.323 [2024-07-14 02:21:25.758316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.323 [2024-07-14 02:21:25.758329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.323 [2024-07-14 02:21:25.761279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.323 [2024-07-14 02:21:25.770645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.323 [2024-07-14 02:21:25.771093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.323 [2024-07-14 02:21:25.771123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.323 [2024-07-14 02:21:25.771139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.323 [2024-07-14 02:21:25.771392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.323 [2024-07-14 02:21:25.771600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.323 [2024-07-14 02:21:25.771622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.323 [2024-07-14 02:21:25.771635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.323 [2024-07-14 02:21:25.774640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.323 [2024-07-14 02:21:25.783835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.323 [2024-07-14 02:21:25.784279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.323 [2024-07-14 02:21:25.784308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.323 [2024-07-14 02:21:25.784325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.323 [2024-07-14 02:21:25.784573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.323 [2024-07-14 02:21:25.784766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.323 [2024-07-14 02:21:25.784787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.323 [2024-07-14 02:21:25.784800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.323 [2024-07-14 02:21:25.787789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.323 [2024-07-14 02:21:25.797028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.323 [2024-07-14 02:21:25.797411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.323 [2024-07-14 02:21:25.797439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.323 [2024-07-14 02:21:25.797455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.323 [2024-07-14 02:21:25.797690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.323 [2024-07-14 02:21:25.797912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.323 [2024-07-14 02:21:25.797934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.323 [2024-07-14 02:21:25.797947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.323 [2024-07-14 02:21:25.801008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.323 [2024-07-14 02:21:25.810407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.323 [2024-07-14 02:21:25.810828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.323 [2024-07-14 02:21:25.810857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.323 [2024-07-14 02:21:25.810882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.323 [2024-07-14 02:21:25.811125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.323 [2024-07-14 02:21:25.811352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.323 [2024-07-14 02:21:25.811374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.323 [2024-07-14 02:21:25.811388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.323 [2024-07-14 02:21:25.814336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.323 [2024-07-14 02:21:25.823711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.323 [2024-07-14 02:21:25.824190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.323 [2024-07-14 02:21:25.824218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.323 [2024-07-14 02:21:25.824234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.323 [2024-07-14 02:21:25.824479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.323 [2024-07-14 02:21:25.824673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.323 [2024-07-14 02:21:25.824694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.323 [2024-07-14 02:21:25.824708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.323 [2024-07-14 02:21:25.827660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.323 [2024-07-14 02:21:25.837050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.323 [2024-07-14 02:21:25.837479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.323 [2024-07-14 02:21:25.837508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.323 [2024-07-14 02:21:25.837524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.323 [2024-07-14 02:21:25.837773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.323 [2024-07-14 02:21:25.838016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.323 [2024-07-14 02:21:25.838054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.323 [2024-07-14 02:21:25.838078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.323 [2024-07-14 02:21:25.841193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.323 [2024-07-14 02:21:25.850500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.323 [2024-07-14 02:21:25.850927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.323 [2024-07-14 02:21:25.850957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.323 [2024-07-14 02:21:25.850975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.323 [2024-07-14 02:21:25.851215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.323 [2024-07-14 02:21:25.851431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.323 [2024-07-14 02:21:25.851451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.323 [2024-07-14 02:21:25.851464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.323 [2024-07-14 02:21:25.854533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.323 [2024-07-14 02:21:25.863936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.323 [2024-07-14 02:21:25.864392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.323 [2024-07-14 02:21:25.864421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.323 [2024-07-14 02:21:25.864438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.323 [2024-07-14 02:21:25.864688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.323 [2024-07-14 02:21:25.864923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.323 [2024-07-14 02:21:25.864947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.323 [2024-07-14 02:21:25.864963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.868108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.324 [2024-07-14 02:21:25.877415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.877923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.877953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.877968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.878198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.878446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.878467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.878480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.881662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.324 [2024-07-14 02:21:25.890694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.891122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.891151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.891167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.891430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.891623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.891643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.891656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.894700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.324 [2024-07-14 02:21:25.903978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.904382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.904409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.904424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.904644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.904879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.904913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.904927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.907963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.324 [2024-07-14 02:21:25.917277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.917753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.917782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.917798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.918061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.918274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.918294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.918307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.921258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.324 [2024-07-14 02:21:25.930517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.930920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.930949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.930966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.931226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.931420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.931440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.931453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.934411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.324 [2024-07-14 02:21:25.943800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.944279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.944306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.944321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.944549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.944743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.944763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.944776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.947730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.324 [2024-07-14 02:21:25.957020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.957449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.957477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.957492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.957739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.957959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.957980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.957993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.960945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.324 [2024-07-14 02:21:25.970234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.970724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.970753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.970769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.971038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.971252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.971273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.971290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.974283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.324 [2024-07-14 02:21:25.983516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.983931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.983960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.983976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.984229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.984435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.984456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.984469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:25.987422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.324 [2024-07-14 02:21:25.996803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:25.997317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:25.997346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:25.997363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:25.997612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:25.997807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:25.997828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:25.997841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.324 [2024-07-14 02:21:26.000793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.324 [2024-07-14 02:21:26.010148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.324 [2024-07-14 02:21:26.010647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.324 [2024-07-14 02:21:26.010675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.324 [2024-07-14 02:21:26.010697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.324 [2024-07-14 02:21:26.011009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.324 [2024-07-14 02:21:26.011217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.324 [2024-07-14 02:21:26.011239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.324 [2024-07-14 02:21:26.011252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-14 02:21:26.014392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
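The timestamps show the same disconnect, reconnect, and fail cycle repeating roughly every 13 ms: the controller is disconnected, the TCP connect is refused, the pending flush on the never-established qpair fails, and the reset completes with "Resetting controller failed" before the next attempt is scheduled. A simplified retry-loop sketch of that control flow (illustrative only; try_connect() is a placeholder, not an SPDK API, and the backoff value is inferred from the timestamps above):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Placeholder for the transport-level connect step; here it always
     * fails with ECONNREFUSED, mirroring the log. */
    static bool try_connect(void) { errno = ECONNREFUSED; return false; }

    int main(void)
    {
        const int max_attempts = 5;

        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            fprintf(stderr, "resetting controller (attempt %d)\n", attempt);

            if (try_connect()) {
                fprintf(stderr, "controller reconnected\n");
                return 0;
            }

            /* A refused connection leaves the controller in a failed state and
             * the reset is reported as failed before the next retry. */
            fprintf(stderr, "connect failed (errno=%d); controller reinitialization failed\n", errno);
            usleep(13 * 1000);  /* ~13 ms between attempts, as the timestamps suggest */
        }

        fprintf(stderr, "giving up after %d attempts\n", max_attempts);
        return 1;
    }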
00:34:20.584 [2024-07-14 02:21:26.023349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-14 02:21:26.023778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-14 02:21:26.023813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-14 02:21:26.023830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.584 [2024-07-14 02:21:26.024082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.024316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.024338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.024352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.027324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.585 [2024-07-14 02:21:26.036651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.037095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.037125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.037156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.037384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.037578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.037598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.037611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.040533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.585 [2024-07-14 02:21:26.049902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.050297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.050324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.050340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.050567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.050761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.050781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.050794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.053847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.585 [2024-07-14 02:21:26.063252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.063657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.063685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.063701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.063945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.064150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.064171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.064200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.067130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.585 [2024-07-14 02:21:26.076558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.076985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.077013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.077029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.077259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.077453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.077474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.077486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.080439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.585 [2024-07-14 02:21:26.089800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.090252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.090279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.090295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.090508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.090716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.090736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.090749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.093663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.585 [2024-07-14 02:21:26.103045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.103509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.103537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.103553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.103787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.104025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.104046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.104059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.107010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.585 [2024-07-14 02:21:26.116316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.116794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.116823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.116839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.117101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.117311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.117333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.117345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.120291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.585 [2024-07-14 02:21:26.129409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.129834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.129862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.129889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.130142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.130351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.130372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.130386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.133335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.585 [2024-07-14 02:21:26.142706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.143144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.143174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.143190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.143440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.143632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.143653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.143665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.146613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.585 [2024-07-14 02:21:26.156046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.156425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.156453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.156473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.585 [2024-07-14 02:21:26.156702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.585 [2024-07-14 02:21:26.156923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-14 02:21:26.156944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-14 02:21:26.156958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-14 02:21:26.159904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.585 [2024-07-14 02:21:26.169288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-14 02:21:26.169700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-14 02:21:26.169728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-14 02:21:26.169743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.586 [2024-07-14 02:21:26.169987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.586 [2024-07-14 02:21:26.170202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.586 [2024-07-14 02:21:26.170224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.586 [2024-07-14 02:21:26.170238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.586 [2024-07-14 02:21:26.173334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.586 [2024-07-14 02:21:26.182531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.586 [2024-07-14 02:21:26.182914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-14 02:21:26.182942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.586 [2024-07-14 02:21:26.182958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.586 [2024-07-14 02:21:26.183194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.586 [2024-07-14 02:21:26.183403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.586 [2024-07-14 02:21:26.183425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.586 [2024-07-14 02:21:26.183438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.586 [2024-07-14 02:21:26.186389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.586 [2024-07-14 02:21:26.195750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.586 [2024-07-14 02:21:26.196207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-14 02:21:26.196235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.586 [2024-07-14 02:21:26.196251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.586 [2024-07-14 02:21:26.196487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.586 [2024-07-14 02:21:26.196696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.586 [2024-07-14 02:21:26.196721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.586 [2024-07-14 02:21:26.196736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.586 [2024-07-14 02:21:26.199690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.586 [2024-07-14 02:21:26.209073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.586 [2024-07-14 02:21:26.209474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-14 02:21:26.209503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.586 [2024-07-14 02:21:26.209519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.586 [2024-07-14 02:21:26.209749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.586 [2024-07-14 02:21:26.209973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.586 [2024-07-14 02:21:26.209995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.586 [2024-07-14 02:21:26.210009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.586 [2024-07-14 02:21:26.212967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.586 [2024-07-14 02:21:26.222350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.586 [2024-07-14 02:21:26.222763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-14 02:21:26.222791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.586 [2024-07-14 02:21:26.222806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.586 [2024-07-14 02:21:26.223067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.586 [2024-07-14 02:21:26.223278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.586 [2024-07-14 02:21:26.223299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.586 [2024-07-14 02:21:26.223312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.586 [2024-07-14 02:21:26.226298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.586 [2024-07-14 02:21:26.235657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.586 [2024-07-14 02:21:26.236088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-14 02:21:26.236117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.586 [2024-07-14 02:21:26.236133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.586 [2024-07-14 02:21:26.236364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.586 [2024-07-14 02:21:26.236573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.586 [2024-07-14 02:21:26.236594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.586 [2024-07-14 02:21:26.236608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.586 [2024-07-14 02:21:26.239560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.586 [2024-07-14 02:21:26.248957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.586 [2024-07-14 02:21:26.249420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-14 02:21:26.249448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.586 [2024-07-14 02:21:26.249464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.586 [2024-07-14 02:21:26.249698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.586 [2024-07-14 02:21:26.249954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.586 [2024-07-14 02:21:26.249977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.586 [2024-07-14 02:21:26.249991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.586 [2024-07-14 02:21:26.252955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.586 [2024-07-14 02:21:26.262153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.586 [2024-07-14 02:21:26.262592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-14 02:21:26.262619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.586 [2024-07-14 02:21:26.262636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.586 [2024-07-14 02:21:26.262852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.586 [2024-07-14 02:21:26.263061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.586 [2024-07-14 02:21:26.263083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.586 [2024-07-14 02:21:26.263097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.586 [2024-07-14 02:21:26.266046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.848 [2024-07-14 02:21:26.275706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.848 [2024-07-14 02:21:26.276125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.848 [2024-07-14 02:21:26.276164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.848 [2024-07-14 02:21:26.276199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.848 [2024-07-14 02:21:26.276429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.848 [2024-07-14 02:21:26.276624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.848 [2024-07-14 02:21:26.276644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.848 [2024-07-14 02:21:26.276657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.848 [2024-07-14 02:21:26.279670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.848 [2024-07-14 02:21:26.289052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.848 [2024-07-14 02:21:26.289489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.848 [2024-07-14 02:21:26.289518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.848 [2024-07-14 02:21:26.289535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.848 [2024-07-14 02:21:26.289790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.848 [2024-07-14 02:21:26.290016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.848 [2024-07-14 02:21:26.290038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.848 [2024-07-14 02:21:26.290052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.848 [2024-07-14 02:21:26.293022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.848 [2024-07-14 02:21:26.302240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.848 [2024-07-14 02:21:26.302683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.848 [2024-07-14 02:21:26.302711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.848 [2024-07-14 02:21:26.302726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.848 [2024-07-14 02:21:26.302955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.848 [2024-07-14 02:21:26.303162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.848 [2024-07-14 02:21:26.303197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.848 [2024-07-14 02:21:26.303211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.848 [2024-07-14 02:21:26.306247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.848 [2024-07-14 02:21:26.315532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.848 [2024-07-14 02:21:26.316014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.848 [2024-07-14 02:21:26.316043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.848 [2024-07-14 02:21:26.316059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.848 [2024-07-14 02:21:26.316309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.848 [2024-07-14 02:21:26.316502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.848 [2024-07-14 02:21:26.316523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.848 [2024-07-14 02:21:26.316537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.848 [2024-07-14 02:21:26.319490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.848 [2024-07-14 02:21:26.328741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.848 [2024-07-14 02:21:26.329138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.848 [2024-07-14 02:21:26.329167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.848 [2024-07-14 02:21:26.329183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.848 [2024-07-14 02:21:26.329417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.848 [2024-07-14 02:21:26.329628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.848 [2024-07-14 02:21:26.329649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.848 [2024-07-14 02:21:26.329666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.848 [2024-07-14 02:21:26.332642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.848 [2024-07-14 02:21:26.342054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.848 [2024-07-14 02:21:26.342496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.848 [2024-07-14 02:21:26.342525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.848 [2024-07-14 02:21:26.342541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.848 [2024-07-14 02:21:26.342790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.848 [2024-07-14 02:21:26.343012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.848 [2024-07-14 02:21:26.343034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.848 [2024-07-14 02:21:26.343047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.848 [2024-07-14 02:21:26.345998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.848 [2024-07-14 02:21:26.355260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.848 [2024-07-14 02:21:26.355676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.848 [2024-07-14 02:21:26.355704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.848 [2024-07-14 02:21:26.355720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.848 [2024-07-14 02:21:26.355981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.848 [2024-07-14 02:21:26.356196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.848 [2024-07-14 02:21:26.356216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.848 [2024-07-14 02:21:26.356229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.848 [2024-07-14 02:21:26.359180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.848 [2024-07-14 02:21:26.368548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.848 [2024-07-14 02:21:26.368965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.848 [2024-07-14 02:21:26.368994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.369011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.369261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.369454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.369474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.369487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.372442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.849 [2024-07-14 02:21:26.381914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.382412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.382441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.382458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.382712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.382931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.382953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.382966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.385914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.849 [2024-07-14 02:21:26.395126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.395555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.395583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.395599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.395844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.396066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.396088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.396101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.399051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.849 [2024-07-14 02:21:26.408451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.408843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.408894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.408911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.409164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.409374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.409394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.409407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.412361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.849 [2024-07-14 02:21:26.421729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.422194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.422223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.422238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.422489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.422682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.422703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.422717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.425708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.849 [2024-07-14 02:21:26.434939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.435394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.435423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.435438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.435667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.435886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.435908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.435929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.438881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.849 [2024-07-14 02:21:26.448271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.448633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.448661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.448676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.448917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.449132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.449152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.449164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.452154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.849 [2024-07-14 02:21:26.461533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.462009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.462037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.462053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.462297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.462490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.462511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.462525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.465479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.849 [2024-07-14 02:21:26.474859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.475290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.475318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.475333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.475583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.475776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.475797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.475810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.478762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.849 [2024-07-14 02:21:26.488155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.488644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.488673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.488690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.488923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.489128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.489163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.489177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.492124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.849 [2024-07-14 02:21:26.501341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.501816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.849 [2024-07-14 02:21:26.501844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.849 [2024-07-14 02:21:26.501886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.849 [2024-07-14 02:21:26.502137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.849 [2024-07-14 02:21:26.502347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.849 [2024-07-14 02:21:26.502368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.849 [2024-07-14 02:21:26.502381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.849 [2024-07-14 02:21:26.505327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.849 [2024-07-14 02:21:26.514523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.849 [2024-07-14 02:21:26.515003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.850 [2024-07-14 02:21:26.515039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.850 [2024-07-14 02:21:26.515056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.850 [2024-07-14 02:21:26.515305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.850 [2024-07-14 02:21:26.515498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.850 [2024-07-14 02:21:26.515519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.850 [2024-07-14 02:21:26.515532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.850 [2024-07-14 02:21:26.518483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.850 [2024-07-14 02:21:26.527896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.850 [2024-07-14 02:21:26.528292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.850 [2024-07-14 02:21:26.528320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:20.850 [2024-07-14 02:21:26.528336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:20.850 [2024-07-14 02:21:26.528586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:20.850 [2024-07-14 02:21:26.528779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.850 [2024-07-14 02:21:26.528799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.850 [2024-07-14 02:21:26.528813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.850 [2024-07-14 02:21:26.531764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.112 [2024-07-14 02:21:26.541294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.112 [2024-07-14 02:21:26.541784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.112 [2024-07-14 02:21:26.541812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.112 [2024-07-14 02:21:26.541830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.112 [2024-07-14 02:21:26.542106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.112 [2024-07-14 02:21:26.542315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.112 [2024-07-14 02:21:26.542335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.112 [2024-07-14 02:21:26.542348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.112 [2024-07-14 02:21:26.545479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.112 [2024-07-14 02:21:26.554559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.112 [2024-07-14 02:21:26.554982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.112 [2024-07-14 02:21:26.555011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.112 [2024-07-14 02:21:26.555039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.112 [2024-07-14 02:21:26.555288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.112 [2024-07-14 02:21:26.555487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.112 [2024-07-14 02:21:26.555508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.112 [2024-07-14 02:21:26.555521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.112 [2024-07-14 02:21:26.558673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.112 [2024-07-14 02:21:26.567978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.112 [2024-07-14 02:21:26.568454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.112 [2024-07-14 02:21:26.568483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.112 [2024-07-14 02:21:26.568500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.112 [2024-07-14 02:21:26.568753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.112 [2024-07-14 02:21:26.568991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.112 [2024-07-14 02:21:26.569013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.112 [2024-07-14 02:21:26.569026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.112 [2024-07-14 02:21:26.571976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.112 [2024-07-14 02:21:26.581233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.112 [2024-07-14 02:21:26.581711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.112 [2024-07-14 02:21:26.581740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.112 [2024-07-14 02:21:26.581755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.112 [2024-07-14 02:21:26.582019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.112 [2024-07-14 02:21:26.582231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.112 [2024-07-14 02:21:26.582253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.112 [2024-07-14 02:21:26.582265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.112 [2024-07-14 02:21:26.585440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.112 [2024-07-14 02:21:26.594516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.112 [2024-07-14 02:21:26.594904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.112 [2024-07-14 02:21:26.594933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.112 [2024-07-14 02:21:26.594948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.112 [2024-07-14 02:21:26.595176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.112 [2024-07-14 02:21:26.595369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.112 [2024-07-14 02:21:26.595390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.112 [2024-07-14 02:21:26.595403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.112 [2024-07-14 02:21:26.598361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.112 [2024-07-14 02:21:26.607727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.112 [2024-07-14 02:21:26.608167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.112 [2024-07-14 02:21:26.608210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.112 [2024-07-14 02:21:26.608225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.112 [2024-07-14 02:21:26.608470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.112 [2024-07-14 02:21:26.608663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.112 [2024-07-14 02:21:26.608684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.112 [2024-07-14 02:21:26.608697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.112 [2024-07-14 02:21:26.611647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.112 [2024-07-14 02:21:26.620998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.112 [2024-07-14 02:21:26.621380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.112 [2024-07-14 02:21:26.621408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.112 [2024-07-14 02:21:26.621423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.112 [2024-07-14 02:21:26.621640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.112 [2024-07-14 02:21:26.621863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.112 [2024-07-14 02:21:26.621893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.112 [2024-07-14 02:21:26.621907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.112 [2024-07-14 02:21:26.624832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.112 [2024-07-14 02:21:26.634278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.112 [2024-07-14 02:21:26.634754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.112 [2024-07-14 02:21:26.634782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.112 [2024-07-14 02:21:26.634797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.112 [2024-07-14 02:21:26.635058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.112 [2024-07-14 02:21:26.635270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.112 [2024-07-14 02:21:26.635295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.112 [2024-07-14 02:21:26.635308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.112 [2024-07-14 02:21:26.638257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.112 [2024-07-14 02:21:26.647456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.112 [2024-07-14 02:21:26.647917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.647948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.647968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.648202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.648410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.648431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.648445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.651436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.113 [2024-07-14 02:21:26.660637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.661071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.661099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.661116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.661361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.661554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.661575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.661587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.664539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.113 [2024-07-14 02:21:26.673933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.674360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.674388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.674403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.674638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.674861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.674892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.674906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.677856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.113 [2024-07-14 02:21:26.687422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.687873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.687901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.687917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.688153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.688364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.688389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.688403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.691276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.113 [2024-07-14 02:21:26.700637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.701086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.701115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.701131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.701393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.701586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.701606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.701619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.704571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.113 [2024-07-14 02:21:26.714591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.715041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.715073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.715091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.715330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.715572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.715598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.715614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.719196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.113 [2024-07-14 02:21:26.728465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.728939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.728972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.728990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.729228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.729470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.729495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.729511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.733083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.113 [2024-07-14 02:21:26.742342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.742807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.742840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.742860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.743113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.743356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.743381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.743398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.746974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.113 [2024-07-14 02:21:26.756233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.756717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.756745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.756761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.757026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.757269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.757295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.757311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.760883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.113 [2024-07-14 02:21:26.770138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.770688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.770737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.770755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.771007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.771250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.771276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.771293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.774874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.113 [2024-07-14 02:21:26.784132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.113 [2024-07-14 02:21:26.784635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.113 [2024-07-14 02:21:26.784685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.113 [2024-07-14 02:21:26.784703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.113 [2024-07-14 02:21:26.784960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.113 [2024-07-14 02:21:26.785203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.113 [2024-07-14 02:21:26.785229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.113 [2024-07-14 02:21:26.785245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.113 [2024-07-14 02:21:26.788809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.113 [2024-07-14 02:21:26.798194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.114 [2024-07-14 02:21:26.798803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.114 [2024-07-14 02:21:26.798856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.114 [2024-07-14 02:21:26.798886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.114 [2024-07-14 02:21:26.799127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.114 [2024-07-14 02:21:26.799369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.114 [2024-07-14 02:21:26.799395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.114 [2024-07-14 02:21:26.799411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.374 [2024-07-14 02:21:26.803095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.374 [2024-07-14 02:21:26.812295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.374 [2024-07-14 02:21:26.812776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.374 [2024-07-14 02:21:26.812805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.374 [2024-07-14 02:21:26.812821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.374 [2024-07-14 02:21:26.813101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.374 [2024-07-14 02:21:26.813344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.374 [2024-07-14 02:21:26.813370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.374 [2024-07-14 02:21:26.813387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.374 [2024-07-14 02:21:26.816965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.374 [2024-07-14 02:21:26.826235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.374 [2024-07-14 02:21:26.826692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.374 [2024-07-14 02:21:26.826724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.374 [2024-07-14 02:21:26.826742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.374 [2024-07-14 02:21:26.826992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.374 [2024-07-14 02:21:26.827235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.827260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.827284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.830854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.375 [2024-07-14 02:21:26.840126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.840583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.840615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.840633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.840886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.841129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.841154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.841171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.844737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.375 [2024-07-14 02:21:26.854003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.854449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.854480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.854498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.854737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.854993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.855019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.855035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.858600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.375 [2024-07-14 02:21:26.867879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.868345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.868377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.868395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.868633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.868889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.868922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.868938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.872519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.375 [2024-07-14 02:21:26.881862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.882353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.882382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.882397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.882648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.882904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.882933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.882949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.886518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.375 [2024-07-14 02:21:26.895786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.896230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.896262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.896280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.896517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.896760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.896785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.896802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.900380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.375 [2024-07-14 02:21:26.909644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.910105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.910138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.910157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.910396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.910638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.910663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.910680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.914263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.375 [2024-07-14 02:21:26.923519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.923985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.924014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.924030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.924279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.924528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.924553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.924569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.928146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.375 [2024-07-14 02:21:26.937400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.937835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.937862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.937904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.938147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.938389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.938415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.938431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.942003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.375 [2024-07-14 02:21:26.951263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.951734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.951762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.951778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.952041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.952284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.952309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.952325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.955900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.375 [2024-07-14 02:21:26.965157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.965621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.965649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.965665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.965928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.375 [2024-07-14 02:21:26.966171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.375 [2024-07-14 02:21:26.966196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.375 [2024-07-14 02:21:26.966213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.375 [2024-07-14 02:21:26.969782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.375 [2024-07-14 02:21:26.979054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.375 [2024-07-14 02:21:26.979520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.375 [2024-07-14 02:21:26.979552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.375 [2024-07-14 02:21:26.979569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.375 [2024-07-14 02:21:26.979807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.376 [2024-07-14 02:21:26.980064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.376 [2024-07-14 02:21:26.980090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.376 [2024-07-14 02:21:26.980106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.376 [2024-07-14 02:21:26.983671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.376 [2024-07-14 02:21:26.992947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.376 [2024-07-14 02:21:26.993414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.376 [2024-07-14 02:21:26.993446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.376 [2024-07-14 02:21:26.993464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.376 [2024-07-14 02:21:26.993702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.376 [2024-07-14 02:21:26.993957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.376 [2024-07-14 02:21:26.993982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.376 [2024-07-14 02:21:26.993998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.376 [2024-07-14 02:21:26.997564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.376 [2024-07-14 02:21:27.006901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.376 [2024-07-14 02:21:27.007370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.376 [2024-07-14 02:21:27.007398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.376 [2024-07-14 02:21:27.007413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.376 [2024-07-14 02:21:27.007656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.376 [2024-07-14 02:21:27.007910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.376 [2024-07-14 02:21:27.007934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.376 [2024-07-14 02:21:27.007951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.376 [2024-07-14 02:21:27.011522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.376 [2024-07-14 02:21:27.020789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.376 [2024-07-14 02:21:27.021264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.376 [2024-07-14 02:21:27.021292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.376 [2024-07-14 02:21:27.021313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.376 [2024-07-14 02:21:27.021571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.376 [2024-07-14 02:21:27.021815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.376 [2024-07-14 02:21:27.021839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.376 [2024-07-14 02:21:27.021856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.376 [2024-07-14 02:21:27.025436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.376 [2024-07-14 02:21:27.034715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.376 [2024-07-14 02:21:27.035184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.376 [2024-07-14 02:21:27.035216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.376 [2024-07-14 02:21:27.035234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.376 [2024-07-14 02:21:27.035473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.376 [2024-07-14 02:21:27.035716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.376 [2024-07-14 02:21:27.035740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.376 [2024-07-14 02:21:27.035756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.376 [2024-07-14 02:21:27.039337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.376 [2024-07-14 02:21:27.048612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.376 [2024-07-14 02:21:27.049054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.376 [2024-07-14 02:21:27.049081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.376 [2024-07-14 02:21:27.049097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.376 [2024-07-14 02:21:27.049330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.376 [2024-07-14 02:21:27.049589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.376 [2024-07-14 02:21:27.049614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.376 [2024-07-14 02:21:27.049630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.376 [2024-07-14 02:21:27.053207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.376 [2024-07-14 02:21:27.062633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.376 [2024-07-14 02:21:27.063106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.376 [2024-07-14 02:21:27.063156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.376 [2024-07-14 02:21:27.063188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.376 [2024-07-14 02:21:27.063437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.376 [2024-07-14 02:21:27.063641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.376 [2024-07-14 02:21:27.063662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.376 [2024-07-14 02:21:27.063676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.067138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.637 [2024-07-14 02:21:27.076679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.077096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.077126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.077143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.077394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.077638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.077659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.077673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.081266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.637 [2024-07-14 02:21:27.090246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.090631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.090669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.090704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.090975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.091220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.091245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.091261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.094905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.637 [2024-07-14 02:21:27.104241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.104749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.104800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.104818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.105067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.105318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.105343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.105359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.108961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.637 [2024-07-14 02:21:27.118067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.118574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.118624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.118642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.118891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.119108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.119129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.119160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.122695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.637 [2024-07-14 02:21:27.132054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.132588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.132637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.132656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.132916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.133124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.133160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.133173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.136698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.637 [2024-07-14 02:21:27.145913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.146408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.146458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.146477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.146715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.146969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.146993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.147010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.150571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.637 [2024-07-14 02:21:27.159840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.160345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.160394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.160418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.160657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.160909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.160936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.160952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.164516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.637 [2024-07-14 02:21:27.173810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.174261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.174290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.174306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.174549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.174792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.174817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.174833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.178422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.637 [2024-07-14 02:21:27.187698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.188164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.188196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.188214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.188451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.188693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.188719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.188735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.192313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.637 [2024-07-14 02:21:27.201569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.637 [2024-07-14 02:21:27.202000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.637 [2024-07-14 02:21:27.202032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.637 [2024-07-14 02:21:27.202051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.637 [2024-07-14 02:21:27.202290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.637 [2024-07-14 02:21:27.202534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.637 [2024-07-14 02:21:27.202568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.637 [2024-07-14 02:21:27.202586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.637 [2024-07-14 02:21:27.206165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.637 [2024-07-14 02:21:27.215463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.638 [2024-07-14 02:21:27.215900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.638 [2024-07-14 02:21:27.215934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.638 [2024-07-14 02:21:27.215953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.638 [2024-07-14 02:21:27.216192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.638 [2024-07-14 02:21:27.216434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.638 [2024-07-14 02:21:27.216460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.638 [2024-07-14 02:21:27.216476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.638 [2024-07-14 02:21:27.220056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.638 [2024-07-14 02:21:27.229315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.638 [2024-07-14 02:21:27.229755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.638 [2024-07-14 02:21:27.229788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.638 [2024-07-14 02:21:27.229806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.638 [2024-07-14 02:21:27.230058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.638 [2024-07-14 02:21:27.230302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.638 [2024-07-14 02:21:27.230327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.638 [2024-07-14 02:21:27.230343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.638 [2024-07-14 02:21:27.233915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.638 [2024-07-14 02:21:27.243173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.638 [2024-07-14 02:21:27.243633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.638 [2024-07-14 02:21:27.243665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.638 [2024-07-14 02:21:27.243683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.638 [2024-07-14 02:21:27.243935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.638 [2024-07-14 02:21:27.244177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.638 [2024-07-14 02:21:27.244203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.638 [2024-07-14 02:21:27.244220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.638 [2024-07-14 02:21:27.247784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.638 [2024-07-14 02:21:27.257050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.638 [2024-07-14 02:21:27.257511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.638 [2024-07-14 02:21:27.257542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.638 [2024-07-14 02:21:27.257560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.638 [2024-07-14 02:21:27.257798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.638 [2024-07-14 02:21:27.258053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.638 [2024-07-14 02:21:27.258079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.638 [2024-07-14 02:21:27.258096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.638 [2024-07-14 02:21:27.261660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.638 [2024-07-14 02:21:27.270930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.638 [2024-07-14 02:21:27.271402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.638 [2024-07-14 02:21:27.271430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.638 [2024-07-14 02:21:27.271446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.638 [2024-07-14 02:21:27.271689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.638 [2024-07-14 02:21:27.271955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.638 [2024-07-14 02:21:27.271982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.638 [2024-07-14 02:21:27.271998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.638 [2024-07-14 02:21:27.275567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.638 [2024-07-14 02:21:27.284835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.638 [2024-07-14 02:21:27.285296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.638 [2024-07-14 02:21:27.285328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.638 [2024-07-14 02:21:27.285346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.638 [2024-07-14 02:21:27.285585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.638 [2024-07-14 02:21:27.285827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.638 [2024-07-14 02:21:27.285852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.638 [2024-07-14 02:21:27.285879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.638 [2024-07-14 02:21:27.289448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.638 [2024-07-14 02:21:27.298701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.638 [2024-07-14 02:21:27.299144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.638 [2024-07-14 02:21:27.299177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.638 [2024-07-14 02:21:27.299196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.638 [2024-07-14 02:21:27.299441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.638 [2024-07-14 02:21:27.299686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.638 [2024-07-14 02:21:27.299711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.638 [2024-07-14 02:21:27.299727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.638 [2024-07-14 02:21:27.303305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.638 [2024-07-14 02:21:27.312561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.638 [2024-07-14 02:21:27.313019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.638 [2024-07-14 02:21:27.313052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.638 [2024-07-14 02:21:27.313069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.638 [2024-07-14 02:21:27.313308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.638 [2024-07-14 02:21:27.313550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.638 [2024-07-14 02:21:27.313576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.638 [2024-07-14 02:21:27.313592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.638 [2024-07-14 02:21:27.317176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.638 [2024-07-14 02:21:27.326651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.638 [2024-07-14 02:21:27.327134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.638 [2024-07-14 02:21:27.327166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.638 [2024-07-14 02:21:27.327185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.898 [2024-07-14 02:21:27.327423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.898 [2024-07-14 02:21:27.327667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.898 [2024-07-14 02:21:27.327693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.898 [2024-07-14 02:21:27.327710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.898 [2024-07-14 02:21:27.331342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.898 [2024-07-14 02:21:27.340500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.898 [2024-07-14 02:21:27.340937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.898 [2024-07-14 02:21:27.340970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.898 [2024-07-14 02:21:27.340989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.898 [2024-07-14 02:21:27.341228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.898 [2024-07-14 02:21:27.341471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.898 [2024-07-14 02:21:27.341496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.898 [2024-07-14 02:21:27.341519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.898 [2024-07-14 02:21:27.345095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.898 [2024-07-14 02:21:27.354351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.898 [2024-07-14 02:21:27.354817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.898 [2024-07-14 02:21:27.354850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.898 [2024-07-14 02:21:27.354878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.898 [2024-07-14 02:21:27.355120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.898 [2024-07-14 02:21:27.355363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.355388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.355404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.358980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.899 [2024-07-14 02:21:27.368240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.368692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.368724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.368742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.368993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.369237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.369263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.369279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.372851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.899 [2024-07-14 02:21:27.382140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.382602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.382630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.382646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.382906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.383149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.383175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.383191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.386756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.899 [2024-07-14 02:21:27.396027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.396485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.396523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.396542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.396781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.397038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.397064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.397081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.400645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.899 [2024-07-14 02:21:27.409913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.410380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.410422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.410439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.410678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.410936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.410965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.410981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.414561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.899 [2024-07-14 02:21:27.423875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.424310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.424342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.424360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.424599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.424842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.424876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.424895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.428466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.899 [2024-07-14 02:21:27.437723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.438165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.438197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.438215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.438453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.438702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.438728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.438744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.442321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.899 [2024-07-14 02:21:27.451589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.452053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.452086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.452104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.452344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.452588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.452612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.452628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.456201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.899 [2024-07-14 02:21:27.465452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.465891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.465919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.465934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.466166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.466409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.466434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.466450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.470027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.899 [2024-07-14 02:21:27.479310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.479757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.479791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.479809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.480060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.480303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.480329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.480346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.483934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.899 [2024-07-14 02:21:27.493204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.493637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.493665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.493680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.493944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.899 [2024-07-14 02:21:27.494145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.899 [2024-07-14 02:21:27.494184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.899 [2024-07-14 02:21:27.494200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.899 [2024-07-14 02:21:27.497771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.899 [2024-07-14 02:21:27.507052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.899 [2024-07-14 02:21:27.507507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.899 [2024-07-14 02:21:27.507538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.899 [2024-07-14 02:21:27.507556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.899 [2024-07-14 02:21:27.507795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.900 [2024-07-14 02:21:27.508050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.900 [2024-07-14 02:21:27.508075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.900 [2024-07-14 02:21:27.508092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.900 [2024-07-14 02:21:27.511656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.900 [2024-07-14 02:21:27.520927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.900 [2024-07-14 02:21:27.521429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.900 [2024-07-14 02:21:27.521456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.900 [2024-07-14 02:21:27.521471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.900 [2024-07-14 02:21:27.521725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.900 [2024-07-14 02:21:27.521978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.900 [2024-07-14 02:21:27.522003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.900 [2024-07-14 02:21:27.522020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1741990 Killed "${NVMF_APP[@]}" "$@" 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:21.900 [2024-07-14 02:21:27.525587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1742938 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1742938 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1742938 ']' 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:21.900 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:21.900 [2024-07-14 02:21:27.535003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.900 [2024-07-14 02:21:27.535465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.900 [2024-07-14 02:21:27.535492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.900 [2024-07-14 02:21:27.535509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.900 [2024-07-14 02:21:27.535756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.900 [2024-07-14 02:21:27.536018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.900 [2024-07-14 02:21:27.536041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.900 [2024-07-14 02:21:27.536056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.900 [2024-07-14 02:21:27.539672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.900 [2024-07-14 02:21:27.549028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.900 [2024-07-14 02:21:27.549490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.900 [2024-07-14 02:21:27.549522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.900 [2024-07-14 02:21:27.549541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.900 [2024-07-14 02:21:27.549779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.900 [2024-07-14 02:21:27.550031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.900 [2024-07-14 02:21:27.550054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.900 [2024-07-14 02:21:27.550069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.900 [2024-07-14 02:21:27.553609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.900 [2024-07-14 02:21:27.562881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.900 [2024-07-14 02:21:27.563367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.900 [2024-07-14 02:21:27.563399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.900 [2024-07-14 02:21:27.563423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.900 [2024-07-14 02:21:27.563663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.900 [2024-07-14 02:21:27.563927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.900 [2024-07-14 02:21:27.563949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.900 [2024-07-14 02:21:27.563963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.900 [2024-07-14 02:21:27.566881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.900 [2024-07-14 02:21:27.576780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.900 [2024-07-14 02:21:27.577204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.900 [2024-07-14 02:21:27.577233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:21.900 [2024-07-14 02:21:27.577250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:21.900 [2024-07-14 02:21:27.577487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:21.900 [2024-07-14 02:21:27.577730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.900 [2024-07-14 02:21:27.577755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.900 [2024-07-14 02:21:27.577733] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:21.900 [2024-07-14 02:21:27.577775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.900 [2024-07-14 02:21:27.577800] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.900 [2024-07-14 02:21:27.581281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.159 [2024-07-14 02:21:27.590841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.159 [2024-07-14 02:21:27.591291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.159 [2024-07-14 02:21:27.591320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.159 [2024-07-14 02:21:27.591354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.159 [2024-07-14 02:21:27.591593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.159 [2024-07-14 02:21:27.591837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.159 [2024-07-14 02:21:27.591861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.159 [2024-07-14 02:21:27.591889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.159 [2024-07-14 02:21:27.595537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.159 [2024-07-14 02:21:27.604790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.159 [2024-07-14 02:21:27.605280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.159 [2024-07-14 02:21:27.605312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.159 [2024-07-14 02:21:27.605331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.159 [2024-07-14 02:21:27.605576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.159 [2024-07-14 02:21:27.605819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.159 [2024-07-14 02:21:27.605844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.159 [2024-07-14 02:21:27.605859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.159 [2024-07-14 02:21:27.609344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.159 [2024-07-14 02:21:27.618598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.159 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.159 [2024-07-14 02:21:27.619014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.159 [2024-07-14 02:21:27.619040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.159 [2024-07-14 02:21:27.619056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.159 [2024-07-14 02:21:27.619270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.159 [2024-07-14 02:21:27.619530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.159 [2024-07-14 02:21:27.619554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.619570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.623097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.160 [2024-07-14 02:21:27.632463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.632945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.632974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.632991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.633244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.633487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.160 [2024-07-14 02:21:27.633512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.633528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.637027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.160 [2024-07-14 02:21:27.646272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.646742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.646774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.646792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.647067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.647301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.160 [2024-07-14 02:21:27.647331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.647348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.650833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.160 [2024-07-14 02:21:27.652949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:22.160 [2024-07-14 02:21:27.660093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.660644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.660680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.660700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.660961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.661181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.160 [2024-07-14 02:21:27.661219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.661238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.664740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.160 [2024-07-14 02:21:27.674040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.674587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.674628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.674650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.674909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.675136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.160 [2024-07-14 02:21:27.675159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.675191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.678698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.160 [2024-07-14 02:21:27.687950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.688397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.688429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.688447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.688687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.688950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.160 [2024-07-14 02:21:27.688973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.688988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.692475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.160 [2024-07-14 02:21:27.701747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.702377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.702428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.702451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.702704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.702967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.160 [2024-07-14 02:21:27.702989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.703004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.706494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.160 [2024-07-14 02:21:27.715340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.715961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.715999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.716021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.716281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.716530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.160 [2024-07-14 02:21:27.716556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.716575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.720127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.160 [2024-07-14 02:21:27.728831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.729352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.729383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.729403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.729654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.729877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.160 [2024-07-14 02:21:27.729899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.729929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.733365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.160 [2024-07-14 02:21:27.742832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.743357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.743386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.743404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.743670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.743937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.160 [2024-07-14 02:21:27.743958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.160 [2024-07-14 02:21:27.743973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.160 [2024-07-14 02:21:27.745601] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.160 [2024-07-14 02:21:27.745636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.160 [2024-07-14 02:21:27.745653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.160 [2024-07-14 02:21:27.745667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.160 [2024-07-14 02:21:27.745679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:22.160 [2024-07-14 02:21:27.745781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.160 [2024-07-14 02:21:27.745885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:22.160 [2024-07-14 02:21:27.745889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.160 [2024-07-14 02:21:27.747152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.160 [2024-07-14 02:21:27.756332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.160 [2024-07-14 02:21:27.756930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.160 [2024-07-14 02:21:27.756968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.160 [2024-07-14 02:21:27.756989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.160 [2024-07-14 02:21:27.757240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.160 [2024-07-14 02:21:27.757450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.161 [2024-07-14 02:21:27.757472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.161 [2024-07-14 02:21:27.757489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.161 [2024-07-14 02:21:27.760640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.161 [2024-07-14 02:21:27.769933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.161 [2024-07-14 02:21:27.770521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.161 [2024-07-14 02:21:27.770560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.161 [2024-07-14 02:21:27.770581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.161 [2024-07-14 02:21:27.770832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.161 [2024-07-14 02:21:27.771072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.161 [2024-07-14 02:21:27.771094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.161 [2024-07-14 02:21:27.771111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.161 [2024-07-14 02:21:27.774278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.161 [2024-07-14 02:21:27.783393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.161 [2024-07-14 02:21:27.783968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.161 [2024-07-14 02:21:27.784007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.161 [2024-07-14 02:21:27.784028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.161 [2024-07-14 02:21:27.784280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.161 [2024-07-14 02:21:27.784490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.161 [2024-07-14 02:21:27.784512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.161 [2024-07-14 02:21:27.784528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.161 [2024-07-14 02:21:27.787662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.161 [2024-07-14 02:21:27.796947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.161 [2024-07-14 02:21:27.797599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.161 [2024-07-14 02:21:27.797637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.161 [2024-07-14 02:21:27.797658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.161 [2024-07-14 02:21:27.797934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.161 [2024-07-14 02:21:27.798152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.161 [2024-07-14 02:21:27.798174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.161 [2024-07-14 02:21:27.798190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.161 [2024-07-14 02:21:27.801339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.161 [2024-07-14 02:21:27.810427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.161 [2024-07-14 02:21:27.811088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.161 [2024-07-14 02:21:27.811127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.161 [2024-07-14 02:21:27.811149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.161 [2024-07-14 02:21:27.811397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.161 [2024-07-14 02:21:27.811608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.161 [2024-07-14 02:21:27.811629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.161 [2024-07-14 02:21:27.811646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.161 [2024-07-14 02:21:27.814811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.161 [2024-07-14 02:21:27.823926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.161 [2024-07-14 02:21:27.824434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.161 [2024-07-14 02:21:27.824468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.161 [2024-07-14 02:21:27.824487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.161 [2024-07-14 02:21:27.824743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.161 [2024-07-14 02:21:27.824990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.161 [2024-07-14 02:21:27.825013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.161 [2024-07-14 02:21:27.825029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.161 [2024-07-14 02:21:27.828212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.161 [2024-07-14 02:21:27.837554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.161 [2024-07-14 02:21:27.837973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.161 [2024-07-14 02:21:27.838003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.161 [2024-07-14 02:21:27.838020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.161 [2024-07-14 02:21:27.838250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.161 [2024-07-14 02:21:27.838464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.161 [2024-07-14 02:21:27.838486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.161 [2024-07-14 02:21:27.838499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.161 [2024-07-14 02:21:27.841741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.419 [2024-07-14 02:21:27.851458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.419 [2024-07-14 02:21:27.851884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.419 [2024-07-14 02:21:27.851915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.419 [2024-07-14 02:21:27.851933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.420 [2024-07-14 02:21:27.852162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.420 [2024-07-14 02:21:27.852376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.420 [2024-07-14 02:21:27.852397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.420 [2024-07-14 02:21:27.852411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.420 [2024-07-14 02:21:27.855809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.420 [2024-07-14 02:21:27.864939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.420 [2024-07-14 02:21:27.865472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.420 [2024-07-14 02:21:27.865501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.420 [2024-07-14 02:21:27.865518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.420 [2024-07-14 02:21:27.865764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.420 [2024-07-14 02:21:27.866005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.420 [2024-07-14 02:21:27.866029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.420 [2024-07-14 02:21:27.866043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.420 [2024-07-14 02:21:27.869270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.420 [2024-07-14 02:21:27.877387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.420 [2024-07-14 02:21:27.878560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.420 [2024-07-14 02:21:27.878973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.420 [2024-07-14 02:21:27.879002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.420 [2024-07-14 02:21:27.879019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.420 [2024-07-14 02:21:27.879273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.420 [2024-07-14 02:21:27.879479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.420 [2024-07-14 02:21:27.879500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.420 [2024-07-14 02:21:27.879513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.420 [2024-07-14 02:21:27.882808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.420 [2024-07-14 02:21:27.892150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.420 [2024-07-14 02:21:27.892617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.420 [2024-07-14 02:21:27.892646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.420 [2024-07-14 02:21:27.892662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.420 [2024-07-14 02:21:27.892929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.420 [2024-07-14 02:21:27.893148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.420 [2024-07-14 02:21:27.893173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.420 [2024-07-14 02:21:27.893201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.420 [2024-07-14 02:21:27.896299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.420 [2024-07-14 02:21:27.905607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.420 [2024-07-14 02:21:27.906233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.420 [2024-07-14 02:21:27.906273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.420 [2024-07-14 02:21:27.906295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.420 [2024-07-14 02:21:27.906543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.420 [2024-07-14 02:21:27.906753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.420 [2024-07-14 02:21:27.906774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.420 [2024-07-14 02:21:27.906790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.420 [2024-07-14 02:21:27.909987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.420 Malloc0 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.420 [2024-07-14 02:21:27.919300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.420 [2024-07-14 02:21:27.919849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.420 [2024-07-14 02:21:27.919886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.420 [2024-07-14 02:21:27.919906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.420 [2024-07-14 02:21:27.920125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.420 [2024-07-14 02:21:27.920364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.420 [2024-07-14 02:21:27.920386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.420 [2024-07-14 02:21:27.920400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.420 [2024-07-14 02:21:27.923688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.420 [2024-07-14 02:21:27.932980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.420 [2024-07-14 02:21:27.933397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.420 [2024-07-14 02:21:27.933428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c2ed0 with addr=10.0.0.2, port=4420 00:34:22.420 [2024-07-14 02:21:27.933445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2ed0 is same with the state(5) to be set 00:34:22.420 [2024-07-14 02:21:27.933694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2ed0 (9): Bad file descriptor 00:34:22.420 [2024-07-14 02:21:27.933942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.420 [2024-07-14 02:21:27.933965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.420 [2024-07-14 02:21:27.933980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.420 [2024-07-14 02:21:27.934891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.420 [2024-07-14 02:21:27.937281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.420 02:21:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1742271 00:34:22.420 [2024-07-14 02:21:27.946450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.420 [2024-07-14 02:21:28.023094] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
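Note on the tgt_init sequence traced above: before the host-side reconnects can succeed, the script rebuilds the target with nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener. A minimal hand-run sketch of the same bring-up against an already running nvmf_tgt follows; the rpc.py path and socket are assumptions rather than values taken from this log, but the RPC methods and arguments mirror the trace:
# Sketch only: recreate the bdevperf target configuration by hand.
# Assumes nvmf_tgt is running and reachable on the default RPC socket;
# adjust RPC to the scripts/rpc.py of the SPDK checkout in use.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                      # same transport options as the trace
$RPC bdev_malloc_create 64 512 -b Malloc0                                         # 64 MiB Malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # allow any host, fixed serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Once the listener is up ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the host-side retries stop failing, which is the "Resetting controller successful" notice above.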
00:34:32.399 00:34:32.399 Latency(us) 00:34:32.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.399 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:32.399 Verification LBA range: start 0x0 length 0x4000 00:34:32.399 Nvme1n1 : 15.01 6724.28 26.27 9123.40 0.00 8052.61 837.40 20583.16 00:34:32.399 =================================================================================================================== 00:34:32.399 Total : 6724.28 26.27 9123.40 0.00 8052.61 837.40 20583.16 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:32.399 rmmod nvme_tcp 00:34:32.399 rmmod nvme_fabrics 00:34:32.399 rmmod nvme_keyring 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1742938 ']' 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1742938 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1742938 ']' 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1742938 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1742938 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1742938' 00:34:32.399 killing process with pid 1742938 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1742938 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1742938 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
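For reference, the Latency(us) summary above comes from a single bdevperf job named Nvme1n1 run with core mask 0x1, a verify workload, queue depth 128, 4096-byte I/Os and a roughly 15-second runtime. A rough stand-alone sketch of an equivalent run is shown below; the JSON config file, bdev name and binary path are assumptions written by hand, not values taken from this log or from host/bdevperf.sh:
# Sketch only: drive a similar verify workload from the bdevperf example app
# against the target configured above.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "TCP",
            "adrfam": "IPv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1"
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -m 0x1 -q 128 -o 4096 -w verify -t 15
The large Fail/s column in the table is consistent with the target being killed and restarted mid-run by the test: I/O fails during the window in which every reconnect attempt above was refused.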
00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:32.399 02:21:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.302 02:21:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:34.302 00:34:34.302 real 0m22.172s 00:34:34.302 user 0m58.031s 00:34:34.302 sys 0m4.826s 00:34:34.302 02:21:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:34.302 02:21:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:34.302 ************************************ 00:34:34.302 END TEST nvmf_bdevperf 00:34:34.302 ************************************ 00:34:34.302 02:21:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:34.302 02:21:39 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:34.302 02:21:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:34.302 02:21:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:34.302 02:21:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:34.302 ************************************ 00:34:34.302 START TEST nvmf_target_disconnect 00:34:34.302 ************************************ 00:34:34.302 02:21:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:34.302 * Looking for test storage... 
00:34:34.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:34.302 02:21:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.302 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:34.302 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.302 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:34.303 02:21:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:36.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:36.208 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:36.208 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.209 02:21:41 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:36.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:36.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:36.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:36.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:34:36.209 00:34:36.209 --- 10.0.0.2 ping statistics --- 00:34:36.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.209 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:36.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:36.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:34:36.209 00:34:36.209 --- 10.0.0.1 ping statistics --- 00:34:36.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.209 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:36.209 ************************************ 00:34:36.209 START TEST nvmf_target_disconnect_tc1 00:34:36.209 ************************************ 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:36.209 
02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:36.209 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.209 [2024-07-14 02:21:41.844818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.209 [2024-07-14 02:21:41.844902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1683e70 with addr=10.0.0.2, port=4420 00:34:36.209 [2024-07-14 02:21:41.844935] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:36.209 [2024-07-14 02:21:41.844959] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:36.209 [2024-07-14 02:21:41.844972] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:36.209 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:36.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:36.209 Initializing NVMe Controllers 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:36.209 00:34:36.209 real 0m0.098s 00:34:36.209 user 0m0.042s 00:34:36.209 sys 
0m0.055s 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:36.209 ************************************ 00:34:36.209 END TEST nvmf_target_disconnect_tc1 00:34:36.209 ************************************ 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:36.209 ************************************ 00:34:36.209 START TEST nvmf_target_disconnect_tc2 00:34:36.209 ************************************ 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:36.209 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1746083 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1746083 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1746083 ']' 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
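The trace up to this point has wired the test topology and is now bringing up the target: the target-facing port cvl_0_0 was moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24 in the default namespace, port 4420 was opened in iptables, reachability was confirmed with ping in both directions, and nvmf_tgt is launched inside the namespace. A minimal manual sketch of those steps, assuming the same interface and namespace names used in this run (target binary path abbreviated):

    # create the target namespace and move the target-facing port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address both ends: initiator in the default namespace, target inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP traffic on the listener port and verify reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the SPDK target inside the namespace, as the suite does above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0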
00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:36.470 02:21:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.470 [2024-07-14 02:21:41.950650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:36.470 [2024-07-14 02:21:41.950743] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.470 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.470 [2024-07-14 02:21:42.016257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:36.470 [2024-07-14 02:21:42.108308] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.470 [2024-07-14 02:21:42.108368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.470 [2024-07-14 02:21:42.108390] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.470 [2024-07-14 02:21:42.108407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.470 [2024-07-14 02:21:42.108421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:36.470 [2024-07-14 02:21:42.108514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:36.470 [2024-07-14 02:21:42.108591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:36.470 [2024-07-14 02:21:42.108664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:36.470 [2024-07-14 02:21:42.108657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.730 Malloc0 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:36.730 02:21:42 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.730 [2024-07-14 02:21:42.290085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.730 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.731 [2024-07-14 02:21:42.318328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.731 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.731 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:36.731 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.731 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.731 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.731 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1746111 00:34:36.731 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:36.731 02:21:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:36.731 EAL: No free 2048 kB 
hugepages reported on node 1 00:34:39.295 02:21:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1746083 00:34:39.295 02:21:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 [2024-07-14 02:21:44.343907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: 
CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 [2024-07-14 02:21:44.344258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 
00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Write completed with error (sct=0, sc=8) 00:34:39.295 starting I/O failed 00:34:39.295 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 [2024-07-14 02:21:44.344596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 
Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Read completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 Write completed with error (sct=0, sc=8) 00:34:39.296 starting I/O failed 00:34:39.296 [2024-07-14 02:21:44.344885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:39.296 [2024-07-14 02:21:44.345097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.345137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.345320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.345347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.345558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.345583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.345764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.345789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.345951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.345978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.346154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.346178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 
00:34:39.296 [2024-07-14 02:21:44.346394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.346420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.346590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.346615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.346826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.346851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.347028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.347059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.347204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.347229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.347428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.347453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.347655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.347681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.347835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.347860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.348049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.348075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.348263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.348289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 
00:34:39.296 [2024-07-14 02:21:44.348473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.348498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.348694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.348718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.348893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.348931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.349096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.349122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.349390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.349416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.349592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.349618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.349796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.349822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.350022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.350048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.350204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.296 [2024-07-14 02:21:44.350231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.296 qpair failed and we were unable to recover it. 00:34:39.296 [2024-07-14 02:21:44.350404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.350431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 
00:34:39.297 [2024-07-14 02:21:44.350619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.350645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.350825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.350852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.351062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.351102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.351270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.351298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.351502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.351528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.351711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.351736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.351890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.351925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.352134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.352159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.352347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.352373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.352550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.352576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 
00:34:39.297 [2024-07-14 02:21:44.352766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.352792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.352946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.352973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.353152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.353178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.353360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.353385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.353535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.353574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.353769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.353796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.353981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.354008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.354190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.354216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.354402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.354427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.354582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.354608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 
00:34:39.297 [2024-07-14 02:21:44.354806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.354831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.355001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.355027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.355186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.355211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.355395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.355428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.355651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.355676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.355832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.355857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.356053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.356078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.356264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.356289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.356472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.356496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.356675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.356700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 
00:34:39.297 [2024-07-14 02:21:44.356901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.356928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.357104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.357130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.357280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.357306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.357492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.357518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.357690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.357715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.357923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.357950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.358098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.358123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.358304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.358330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.358505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.358531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.297 qpair failed and we were unable to recover it. 00:34:39.297 [2024-07-14 02:21:44.358709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.297 [2024-07-14 02:21:44.358735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 
00:34:39.298 [2024-07-14 02:21:44.358948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.358975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.359124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.359149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.359340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.359366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.359569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.359595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.359777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.359802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.359978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.360004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.360183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.360210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.360361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.360387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.360540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.360565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.360761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.360789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 
00:34:39.298 [2024-07-14 02:21:44.360984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.361023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.361211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.361238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.361395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.361421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.361593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.361636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.361913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.361940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.362091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.362117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.362338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.362380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.362587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.362612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.362764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.362790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.362979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.363006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 
00:34:39.298 [2024-07-14 02:21:44.363155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.363181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.363379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.363427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.363627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.363656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.363826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.363857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.364038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.364064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.364211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.364237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.364439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.364482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.364812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.364862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.365056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.365085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.365262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.365289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 
00:34:39.298 [2024-07-14 02:21:44.365468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.365495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.365685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.365711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.365915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.365941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.366128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.366154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.366386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.366429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.366709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.366735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.366890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.366916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.367106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.367133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.367339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.367364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 00:34:39.298 [2024-07-14 02:21:44.367588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.298 [2024-07-14 02:21:44.367631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.298 qpair failed and we were unable to recover it. 
00:34:39.299 [2024-07-14 02:21:44.367782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.367808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.367957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.367984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.368193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.368237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.368490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.368533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.368693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.368719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.368879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.368905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.369109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.369135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.369319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.369344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.369543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.369569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.369770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.369795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 
00:34:39.299 [2024-07-14 02:21:44.370004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.370043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.370249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.370277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.370470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.370498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.370696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.370724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.370910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.370935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.371106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.371131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.371287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.371311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.371528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.371555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.371748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.371775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.371973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.371998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 
00:34:39.299 [2024-07-14 02:21:44.372174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.372198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.372354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.372396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.372601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.372625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.372826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.372850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.373034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.373058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.373235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.373259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.373408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.373433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.373631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.373655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.373833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.373857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.374020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.374044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 
00:34:39.299 [2024-07-14 02:21:44.374240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.374267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.374488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.374515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.299 qpair failed and we were unable to recover it. 00:34:39.299 [2024-07-14 02:21:44.374722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.299 [2024-07-14 02:21:44.374746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.374926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.374952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.375099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.375123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.375300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.375324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.375483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.375509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.375733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.375770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.375959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.375984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.376167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.376192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 
00:34:39.300 [2024-07-14 02:21:44.376369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.376394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.376610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.376637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.376841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.376872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.377076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.377101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.377278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.377305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.377477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.377501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.377658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.377682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.377880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.377905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.378060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.378085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.378242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.378266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 
00:34:39.300 [2024-07-14 02:21:44.378433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.378457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.378635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.378663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.378863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.378894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.379062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.379087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.379294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.379321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.379583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.379624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.379826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.379850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.380059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.380083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.380265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.380289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.380488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.380515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 
00:34:39.300 [2024-07-14 02:21:44.380707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.380734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.380930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.380955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.381107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.381132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.381285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.381325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.381516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.381545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.381740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.381768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.381956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.381983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.382183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.382207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.382387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.382412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.382581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.382605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 
00:34:39.300 [2024-07-14 02:21:44.382752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.382777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.382957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.382982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.383162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.383186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.300 [2024-07-14 02:21:44.383358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.300 [2024-07-14 02:21:44.383383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.300 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.383555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.383579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.383758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.383783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.383961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.383986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.384164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.384206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.384376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.384404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.384602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.384627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 
00:34:39.301 [2024-07-14 02:21:44.384816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.384843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.385023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.385048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.385200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.385225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.385419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.385445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.385634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.385661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.385827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.385852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.386057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.386085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.386250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.386277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.386498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.386522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.386725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.386750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 
00:34:39.301 [2024-07-14 02:21:44.386975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.387003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.387200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.387224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.387407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.387436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.387635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.387663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.387900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.387925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.388110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.388137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.388303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.388330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.388533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.388558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.388774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.388801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 00:34:39.301 [2024-07-14 02:21:44.388988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.301 [2024-07-14 02:21:44.389013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.301 qpair failed and we were unable to recover it. 
00:34:39.306 [2024-07-14 02:21:44.432426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.432453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.432649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.432674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.432861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.432896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.433091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.433118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.433315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.433340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.433530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.433557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.433748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.433775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.433998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.434024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.434222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.434249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.434454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.434478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 
00:34:39.306 [2024-07-14 02:21:44.434649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.434674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.434825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.434849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.306 [2024-07-14 02:21:44.435051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.306 [2024-07-14 02:21:44.435078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.306 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.435279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.435303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.435472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.435497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.435689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.435716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.435989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.436014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.436204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.436231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.436424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.436451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.436644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.436668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 
00:34:39.307 [2024-07-14 02:21:44.436823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.436847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.437050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.437077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.437246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.437271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.437467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.437494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.437712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.437739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.437945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.437971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.438173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.438200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.438418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.438446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.438640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.438664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.438842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.438875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 
00:34:39.307 [2024-07-14 02:21:44.439041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.439069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.439259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.439284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.439481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.439508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.439689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.439716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.439879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.439904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.440074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.440099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.440262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.440306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.440495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.440519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.440691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.440715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.440906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.440934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 
00:34:39.307 [2024-07-14 02:21:44.441139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.441164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.441335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.441360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.441564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.441591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.441783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.441807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.441982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.442010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.442205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.442233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.442413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.442437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.442661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.442688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.442884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.442912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.443090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.443118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 
00:34:39.307 [2024-07-14 02:21:44.443278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.443303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.443480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.443504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.443676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.443700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.307 qpair failed and we were unable to recover it. 00:34:39.307 [2024-07-14 02:21:44.443887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.307 [2024-07-14 02:21:44.443928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.444084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.444108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.444281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.444305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.444524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.444551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.444779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.444806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.444999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.445024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.445247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.445274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 
00:34:39.308 [2024-07-14 02:21:44.445439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.445466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.445660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.445685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.445845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.445881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.446044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.446069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.446210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.446235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.446382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.446406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.446634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.446662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.446859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.446892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.447046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.447070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.447266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.447294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 
00:34:39.308 [2024-07-14 02:21:44.447470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.447495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.447641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.447666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.447842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.447877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.448051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.448076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.448268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.448297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.448482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.448510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.448711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.448739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.448940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.448965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.449169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.449197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.449416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.449440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 
00:34:39.308 [2024-07-14 02:21:44.449634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.449662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.449831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.449858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.450058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.450083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.450251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.450278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.450443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.450470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.450667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.450691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.450892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.450933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.451085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.451109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.308 qpair failed and we were unable to recover it. 00:34:39.308 [2024-07-14 02:21:44.451308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.308 [2024-07-14 02:21:44.451333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.451524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.451551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 
00:34:39.309 [2024-07-14 02:21:44.451778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.451805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.451991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.452017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.452214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.452242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.452471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.452496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.452698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.452723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.452915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.452940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.453115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.453139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.453289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.453313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.453507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.453534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.453696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.453723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 
00:34:39.309 [2024-07-14 02:21:44.453915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.453940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.454112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.454140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.454329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.454356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.454529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.454558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.454701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.454725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.454877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.454919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.455086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.455111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.455287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.455312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.455461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.455486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.455640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.455664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 
00:34:39.309 [2024-07-14 02:21:44.455871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.455896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.456120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.456147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.456348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.456373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.456596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.456622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.456827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.456854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.457033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.457058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.457225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.457252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.457429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.457456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.457625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.457650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.457826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.457850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 
00:34:39.309 [2024-07-14 02:21:44.458034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.458062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.458230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.458255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.458413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.458437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.458589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.458614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.458787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.458812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.458972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.458998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.459170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.459194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.459370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.459395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.309 [2024-07-14 02:21:44.459615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.309 [2024-07-14 02:21:44.459642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.309 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.459811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.459841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 
00:34:39.310 [2024-07-14 02:21:44.460043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.460069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.460219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.460244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.460431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.460458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.460679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.460704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.460879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.460907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.461079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.461108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.461307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.461332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.461500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.461527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.461724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.461749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.461963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.461989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 
00:34:39.310 [2024-07-14 02:21:44.462159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.462187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.462377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.462405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.462598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.462623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.462824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.462852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.463056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.463091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.463293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.463317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.463496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.463521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.463718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.463745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.463967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.463992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.464164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.464191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 
00:34:39.310 [2024-07-14 02:21:44.464359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.464387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.464583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.464607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.464808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.464835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.465045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.465069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.465245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.465270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.465431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.465458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.465615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.465642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.465843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.465873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.466076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.466117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.466310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.466337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 
00:34:39.310 [2024-07-14 02:21:44.466549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.466574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.466738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.466765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.466957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.466985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.467180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.467204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.467406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.467432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.467628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.467656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.467820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.467844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.468036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.468060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.468235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.468259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 00:34:39.310 [2024-07-14 02:21:44.468460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.310 [2024-07-14 02:21:44.468485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.310 qpair failed and we were unable to recover it. 
00:34:39.311 [2024-07-14 02:21:44.468665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.468689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.468886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.468918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.469113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.469138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.469333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.469360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.469550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.469577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.469769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.469796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.470013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.470038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.470192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.470217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.470359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.470384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.470602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.470629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 
00:34:39.311 [2024-07-14 02:21:44.470813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.470841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.471055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.471080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.471309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.471336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.471507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.471534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.471725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.471750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.471955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.471983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.472188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.472213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.472391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.472416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.472633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.472661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.472861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.472895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 
00:34:39.311 [2024-07-14 02:21:44.473073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.473097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.473262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.473290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.473509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.473535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.473732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.473756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.473984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.474012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.474185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.474212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.474406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.474431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.474588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.474613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.474761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.474808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.475025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.475051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 
00:34:39.311 [2024-07-14 02:21:44.475207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.475232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.475427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.475454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.475650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.475675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.475935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.475965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.476189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.476217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.476415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.476440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.476658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.476686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.476882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.476910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.477114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.477139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.311 [2024-07-14 02:21:44.477298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.477322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 
00:34:39.311 [2024-07-14 02:21:44.477522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.311 [2024-07-14 02:21:44.477549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.311 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.477724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.477749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.477932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.477957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.478142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.478170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.478367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.478392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.478554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.478581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.478799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.478827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.479035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.479060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.479259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.479286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.479484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.479511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 
00:34:39.312 [2024-07-14 02:21:44.479801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.479857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.480083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.480108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.480292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.480320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.480542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.480567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.480764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.480791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.480985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.481013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.481179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.481204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.481356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.481381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.481550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.481575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.481723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.481748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 
00:34:39.312 [2024-07-14 02:21:44.481950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.481979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.482174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.482198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.482349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.482374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.482561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.482588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.482781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.482808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.482982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.483009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.483164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.483191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.483420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.483447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.483653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.483678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.483861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.483891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 
00:34:39.312 [2024-07-14 02:21:44.484058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.484086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.484319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.484344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.484543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.484570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.484762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.484790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.484993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.485018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.485240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.312 [2024-07-14 02:21:44.485267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.312 qpair failed and we were unable to recover it. 00:34:39.312 [2024-07-14 02:21:44.485434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.485461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.485664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.485688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.485837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.485861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.486113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.486140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 
00:34:39.313 [2024-07-14 02:21:44.486337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.486361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.486559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.486586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.486788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.486829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.487048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.487073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.487275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.487302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.487494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.487521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.487741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.487768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.487972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.487997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.488165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.488192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.488417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.488441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 
00:34:39.313 [2024-07-14 02:21:44.488643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.488671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.488855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.488886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.489035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.489060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.489253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.489280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.489469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.489496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.489684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.489708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.489860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.489904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.490100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.490127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.490295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.490319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.490513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.490540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 
00:34:39.313 [2024-07-14 02:21:44.490760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.490788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.490959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.490983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.491171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.491198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.491387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.491414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.491586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.491611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.491799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.491826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.492062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.492087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.492294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.492318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.492532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.492558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.492753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.492780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 
00:34:39.313 [2024-07-14 02:21:44.492983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.493008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.493207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.493234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.493427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.493454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.493630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.493654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.493853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.493887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.494059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.494086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.494257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.313 [2024-07-14 02:21:44.494282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.313 qpair failed and we were unable to recover it. 00:34:39.313 [2024-07-14 02:21:44.494424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.494449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.494624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.494648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.494819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.494846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 
00:34:39.314 [2024-07-14 02:21:44.495028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.495053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.495254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.495281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.495478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.495503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.495699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.495731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.495892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.495920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.496122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.496147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.496368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.496395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.496555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.496582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.496800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.496824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 00:34:39.314 [2024-07-14 02:21:44.496999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.314 [2024-07-14 02:21:44.497024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.314 qpair failed and we were unable to recover it. 
00:34:39.314 [2024-07-14 02:21:44.497171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.314 [2024-07-14 02:21:44.497213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420
00:34:39.314 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420) repeats for every connection attempt between 02:21:44.497 and 02:21:44.542, and after each attempt the log reports: qpair failed and we were unable to recover it. ...]
00:34:39.344 [2024-07-14 02:21:44.542600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.344 [2024-07-14 02:21:44.542627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420
00:34:39.344 qpair failed and we were unable to recover it.
00:34:39.344 [2024-07-14 02:21:44.542794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.542826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.543042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.543066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.543265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.543292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.543511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.543538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.543759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.543786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.543981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.544006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.544186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.544211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.544388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.544413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.544612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.544639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.544834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.544861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 
00:34:39.344 [2024-07-14 02:21:44.545068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.545092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.545260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.545287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.545474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.545501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.545699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.545724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.545962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.545990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.546164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.546191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.546389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.546413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.546587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.546614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.546805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.546832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.344 [2024-07-14 02:21:44.547033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.547060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 
00:34:39.344 [2024-07-14 02:21:44.547237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.344 [2024-07-14 02:21:44.547264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.344 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.547480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.547508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.547711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.547736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.547931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.547959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.548177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.548204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.548371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.548396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.548546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.548587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.548757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.548788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.548954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.548980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.549148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.549175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 
00:34:39.345 [2024-07-14 02:21:44.549352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.549377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.549523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.549547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.549768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.549795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.549962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.549990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.550184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.550208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.550402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.550429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.550618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.550645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.550813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.550837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.551049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.551077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.551266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.551293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 
00:34:39.345 [2024-07-14 02:21:44.551486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.551510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.551668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.551710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.551925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.551953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.552122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.552147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.552305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.552332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.552529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.552556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.552751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.552778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.552982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.553008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.553204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.553231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.553449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.553474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 
00:34:39.345 [2024-07-14 02:21:44.553671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.553698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.553893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.553922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.554123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.554148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.554367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.554394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.554579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.554606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.554807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.554831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.555037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.555065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.555234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.555261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.555438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.555464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.555615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.555640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 
00:34:39.345 [2024-07-14 02:21:44.555791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.555815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.555960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.345 [2024-07-14 02:21:44.555985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.345 qpair failed and we were unable to recover it. 00:34:39.345 [2024-07-14 02:21:44.556141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.556166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.556368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.556392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.556610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.556635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.556825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.556852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.557077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.557105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.557276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.557300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.557474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.557502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.557671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.557698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 
00:34:39.346 [2024-07-14 02:21:44.557921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.557946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.558145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.558173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.558371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.558395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.558575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.558600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.558768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.558796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.558995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.559023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.559209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.559233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.559393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.559421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.559603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.559630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.559822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.559849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 
00:34:39.346 [2024-07-14 02:21:44.560015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.560040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.560202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.560229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.560433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.560459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.560609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.560635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.560785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.560810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.560992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.561018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.561165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.561190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.561388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.561413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.561626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.561650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.561798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.561841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 
00:34:39.346 [2024-07-14 02:21:44.562069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.562097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.562295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.562320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.562529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.562556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.562745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.562772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.562970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.346 [2024-07-14 02:21:44.562996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.346 qpair failed and we were unable to recover it. 00:34:39.346 [2024-07-14 02:21:44.563168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.563197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.563375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.563402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.563594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.563623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.563845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.563878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.564086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.564110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 
00:34:39.347 [2024-07-14 02:21:44.564309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.564336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.564602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.564651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.564819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.564847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.565051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.565076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.565300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.565328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.565662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.565715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.565915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.565943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.566163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.566188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.566391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.566418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.566747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.566804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 
00:34:39.347 [2024-07-14 02:21:44.567024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.567052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.567228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.567253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.567450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.567478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.567671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.567698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.567887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.567915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.568107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.568131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.568314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.568339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.568511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.568536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.568714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.568738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.568955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.568980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 
00:34:39.347 [2024-07-14 02:21:44.569133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.569157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.569321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.569346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.569567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.569595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.569780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.569805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.569979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.570006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.570180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.570207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.570394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.570421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.570615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.570639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.570842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.570876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 00:34:39.347 [2024-07-14 02:21:44.571106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.347 [2024-07-14 02:21:44.571133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.347 qpair failed and we were unable to recover it. 
00:34:39.347 [2024-07-14 02:21:44.571321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:39.347 [2024-07-14 02:21:44.571348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 
00:34:39.347 qpair failed and we were unable to recover it. 
00:34:39.347 [... the same posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x2362f20 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", repeats continuously from 02:21:44.571546 through 02:21:44.616925 ...] 
00:34:39.353 [2024-07-14 02:21:44.616953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:39.353 [2024-07-14 02:21:44.616982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 
00:34:39.353 qpair failed and we were unable to recover it. 
00:34:39.353 [2024-07-14 02:21:44.617132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.617158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.617424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.617475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.617674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.617702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.617874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.617901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.618084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.618109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.618253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.618278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.618434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.618458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.618633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.618658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.618854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.618886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.619064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.619089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 
00:34:39.353 [2024-07-14 02:21:44.619230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.619255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.619459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.619483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.619685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.619709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.619887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.619912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.620085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.620109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.620314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.620339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.620487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.620512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.620713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.620737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.620895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.620922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.621073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.621098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 
00:34:39.353 [2024-07-14 02:21:44.621327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.621355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.621563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.621588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.621738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.621764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.621966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.621992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.622170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.622195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.622373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.622398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.622593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.622620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.622834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.622859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.623039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.623064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.623241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.623266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 
00:34:39.353 [2024-07-14 02:21:44.623441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.623469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.623663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.623688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.623840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.623871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.624027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.624051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.624200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.624225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.624364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.624390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.624570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.624595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.353 [2024-07-14 02:21:44.624771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.353 [2024-07-14 02:21:44.624798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.353 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.624986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.625014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.625179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.625204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 
00:34:39.354 [2024-07-14 02:21:44.625384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.625410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.625579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.625604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.625749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.625773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.625925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.625950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.626120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.626147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.626340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.626368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.626561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.626588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.626774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.626798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.626979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.627004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.627177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.627202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 
00:34:39.354 [2024-07-14 02:21:44.627376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.627400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.627602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.627627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.627790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.627817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.628013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.628041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.628242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.628270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.628492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.628517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.628663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.628688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.628862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.628895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.629100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.629127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.629329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.629353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 
00:34:39.354 [2024-07-14 02:21:44.629576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.629603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.629818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.629846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.630073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.630101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.630297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.630321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.630508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.630536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.630756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.630781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.630936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.630962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.631137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.631166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.631338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.631363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.631533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.631560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 
00:34:39.354 [2024-07-14 02:21:44.631754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.631781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.631988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.632014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.632183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.632208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.354 [2024-07-14 02:21:44.632375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.354 [2024-07-14 02:21:44.632399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.354 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.632552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.632576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.632745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.632770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.632939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.632965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.633145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.633169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.633348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.633372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.633540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.633565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 
00:34:39.355 [2024-07-14 02:21:44.633767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.633792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.633971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.633996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.634158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.634185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.634375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.634399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.634594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.634621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.634813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.634840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.635015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.635040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.635240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.635265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.635440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.635481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.635754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.635810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 
00:34:39.355 [2024-07-14 02:21:44.636065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.636093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.636295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.636322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.636515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.636542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.636704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.636732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.636958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.636987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.637132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.637156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.637315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.637339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.637514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.637539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.637736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.637763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.637959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.637984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 
00:34:39.355 [2024-07-14 02:21:44.638158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.638182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.638357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.638381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.638566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.638593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.638787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.638811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.638979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.639007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.639210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.639235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.639435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.639460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.639643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.639668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.639823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.639847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.640026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.640051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 
00:34:39.355 [2024-07-14 02:21:44.640273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.640300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.640469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.640494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.640666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.640691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.640878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.640903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.355 [2024-07-14 02:21:44.641077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.355 [2024-07-14 02:21:44.641101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.355 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.641306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.641331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.641475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.641500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.641671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.641695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.641834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.641859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.642055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.642080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 
00:34:39.356 [2024-07-14 02:21:44.642251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.642276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.642450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.642475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.642623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.642647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.642847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.642881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.643039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.643080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.643300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.643327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.643486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.643512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.643697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.643722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.643878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.643904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 00:34:39.356 [2024-07-14 02:21:44.644101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.356 [2024-07-14 02:21:44.644126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.356 qpair failed and we were unable to recover it. 
00:34:39.356 [2024-07-14 02:21:44.644281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:39.356 [2024-07-14 02:21:44.644311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 
00:34:39.356 qpair failed and we were unable to recover it. 
00:34:39.361 [2024-07-14 02:21:44.644454 .. 02:21:44.686879] the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for each successive reconnect attempt in this interval. 
00:34:39.361 [2024-07-14 02:21:44.687044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.687068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.687262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.687317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.687507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.687535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.687722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.687747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.687896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.687921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.688062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.688087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.688299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.688324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.688512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.688537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.688710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.688734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.688941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.688968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 
00:34:39.361 [2024-07-14 02:21:44.689162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.689188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.689364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.689388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.689565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.689589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.689775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.689799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.689986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.690028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.690229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.690254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.690395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.690418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.690598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.690625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.690789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.690815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.691017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.691042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 
00:34:39.361 [2024-07-14 02:21:44.691249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.691276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.691523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.691573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.691798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.691825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.692036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.692061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.692259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.692286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.692541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.692590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.692814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.692841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.361 [2024-07-14 02:21:44.693031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.361 [2024-07-14 02:21:44.693057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.361 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.693257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.693285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.693451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.693478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 
00:34:39.362 [2024-07-14 02:21:44.693651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.693676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.693894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.693920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.694125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.694150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.694381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.694430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.694620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.694647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.694813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.694838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.695024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.695049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.695233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.695258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.695455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.695483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.695678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.695707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 
00:34:39.362 [2024-07-14 02:21:44.695912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.695937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.696088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.696112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.696263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.696290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.696471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.696495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.696663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.696688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.696855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.696891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.697088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.697113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.697314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.697338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.697499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.697526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.697761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.697810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 
00:34:39.362 [2024-07-14 02:21:44.698020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.698046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.698304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.698328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.698524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.698551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.698777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.698805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.698975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.699003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.699173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.699198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.699392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.699419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.699661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.699686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.699860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.699892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.700073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.700097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 
00:34:39.362 [2024-07-14 02:21:44.700251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.700276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.700474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.700522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.700689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.700716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.700903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.700928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.701096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.701129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.701310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.701335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.701536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.701564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.701737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.701761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.701939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.701965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.702106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.702131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 
00:34:39.362 [2024-07-14 02:21:44.702308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.702333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.362 qpair failed and we were unable to recover it. 00:34:39.362 [2024-07-14 02:21:44.702529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.362 [2024-07-14 02:21:44.702554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.702722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.702746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.702895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.702920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.703066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.703091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.703256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.703280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.703478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.703503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.703699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.703727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.703971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.703999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.704220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.704245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 
00:34:39.363 [2024-07-14 02:21:44.704402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.704427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.704608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.704632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.704810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.704835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.704988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.705012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.705221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.705248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.705533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.705581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.705760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.705786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.705970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.705996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.706172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.706199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.706393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.706421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 
00:34:39.363 [2024-07-14 02:21:44.706640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.706668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.706876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.706902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.707099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.707126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.707344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.707395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.707640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.707665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.707845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.707878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.708098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.708129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.708322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.708350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.708512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.708540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.708738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.708762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 
00:34:39.363 [2024-07-14 02:21:44.708915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.708940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.709116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.709140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.709282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.709307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.709478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.709502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.709671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.709696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.709847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.709897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.710084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.710111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.710329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.710353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.710526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.363 [2024-07-14 02:21:44.710551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.363 qpair failed and we were unable to recover it. 00:34:39.363 [2024-07-14 02:21:44.710701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.710726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 
00:34:39.364 [2024-07-14 02:21:44.710896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.710924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.711126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.711151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.711339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.711364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.711518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.711542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.711745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.711770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.711944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.711970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.712149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.712176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.712375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.712402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.712568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.712596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.712762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.712787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 
00:34:39.364 [2024-07-14 02:21:44.712966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.712992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.713177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.713219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.713409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.713436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.713637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.713662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.713830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.713854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.714079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.714104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.714261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.714285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.714457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.714482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.714663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.714687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.714885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.714911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 
00:34:39.364 [2024-07-14 02:21:44.715084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.715109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.715258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.715282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.715478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.715503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.715804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.715857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.716066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.716095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.716274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.716299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.716467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.716491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.716642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.716667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.716814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.716839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 00:34:39.364 [2024-07-14 02:21:44.717035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.364 [2024-07-14 02:21:44.717060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.364 qpair failed and we were unable to recover it. 
00:34:39.365 [2024-07-14 02:21:44.717231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.717257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.717409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.717434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.717580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.717604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.717812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.717836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.718024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.718050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.718226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.718253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.718444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.718471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.718660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.718684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.718848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.718880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.719056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.719081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 
00:34:39.365 [2024-07-14 02:21:44.719257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.719282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.719452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.719476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.719617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.719642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.719845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.719876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.720052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.720077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.720228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.720253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.720402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.720426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.720597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.720621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.720792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.720819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.721007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.721032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 
00:34:39.365 [2024-07-14 02:21:44.721212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.721238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.721410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.721439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.721644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.721669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.721855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.721897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.722077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.722102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.722394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.722448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.722643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.722672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.722838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.722863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.723054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.723079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.723270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.723295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 
00:34:39.365 [2024-07-14 02:21:44.723492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.723520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.723708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.723733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.723932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.723958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.724102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.724127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.724302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.724327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.724505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.724530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.724707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.724731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.724888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.724914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.725091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.725116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.365 [2024-07-14 02:21:44.725257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.725281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 
00:34:39.365 [2024-07-14 02:21:44.725453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.365 [2024-07-14 02:21:44.725478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.365 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.725657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.725681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.725881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.725915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.726072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.726097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.726274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.726299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.726447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.726471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.726646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.726671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.726847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.726877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.727074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.727106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.727311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.727364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 
00:34:39.366 [2024-07-14 02:21:44.727549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.727576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.727758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.727782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.727973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.728000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.728221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.728269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.728479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.728506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.728717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.728741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.728915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.728940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.729115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.729144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.729294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.729318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.729493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.729518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 
00:34:39.366 [2024-07-14 02:21:44.729680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.729706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.729889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.729915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.730067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.730092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.730292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.730317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.730521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.730546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.730718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.730742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.730918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.730944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.731114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.731139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.731313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.731338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.731548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.731573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 
00:34:39.366 [2024-07-14 02:21:44.731782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.731810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.732006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.732031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.732212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.732237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.732474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.732521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.732717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.732745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.732920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.732945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.733140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.733165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.733366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.733392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.733548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.733574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.733751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.733776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 
00:34:39.366 [2024-07-14 02:21:44.733935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.366 [2024-07-14 02:21:44.733960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.366 qpair failed and we were unable to recover it. 00:34:39.366 [2024-07-14 02:21:44.734130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.734155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.734342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.734367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.734539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.734564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.734708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.734731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.734943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.734968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.735117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.735171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.735364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.735389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.735562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.735587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.735776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.735801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 
00:34:39.367 [2024-07-14 02:21:44.735978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.736004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.736181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.736205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.736348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.736373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.736579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.736604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.736776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.736801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.736999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.737025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.737199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.737224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.737390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.737414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.737585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.737609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.737813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.737841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 
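Note: every record in the stretch above is the same two-line failure for qpair 0x2362f20: the connect() call inside posix_sock_create returns errno = 111, which on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 (the conventional NVMe/TCP port) at that moment, so nvme_tcp_qpair_connect_sock cannot establish the queue pair and the attempt is reported as unrecoverable. The sketch below is a minimal stand-alone reproduction of that errno, not SPDK code; the address and port are simply copied from the log.

    /* Illustrative only (not SPDK code): attempt a TCP connect to the
     * address/port seen in the log and report errno on failure.  When no
     * listener is bound to 10.0.0.2:4420, connect() fails with errno 111
     * (ECONNREFUSED), matching the posix_sock_create error records above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Expected when the target is not listening:
             * "connect() failed, errno = 111 (Connection refused)" */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected\n");
        }

        close(fd);
        return 0;
    }

In other words, the repetition in the log reflects the initiator retrying the same refused connection, not a different fault on each line.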
00:34:39.367 [2024-07-14 02:21:44.738080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.738121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.738335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.738363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.738540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.738566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.738758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.738802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.739036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.739062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.739245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.739271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.739449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.739476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.739670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.739699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.739887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.739941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.740119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.740155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 
00:34:39.367 [2024-07-14 02:21:44.740308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.740333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.740530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.740559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.740749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.740778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.740952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.740988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.367 [2024-07-14 02:21:44.741143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.367 [2024-07-14 02:21:44.741170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.367 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.741345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.741371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.741530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.741557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.741763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.741790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.741995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.742035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.742220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.742247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 
00:34:39.368 [2024-07-14 02:21:44.742429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.742455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.742636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.742662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.742845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.742877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.743052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.743079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.743251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.743277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.743452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.743477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.743634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.743659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.743857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.743891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.744067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.744092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.744270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.744300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 
00:34:39.368 [2024-07-14 02:21:44.744454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.744479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.744653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.744678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.744848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.744881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.745088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.745114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.745258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.745285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.745464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.745490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.745666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.745692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.745877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.745903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.746057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.746083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.746224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.746249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 
00:34:39.368 [2024-07-14 02:21:44.746424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.746450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.746600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.746627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.746803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.746828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.747018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.747045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.747221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.747246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.747449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.747475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.747649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.747675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.747847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.747881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.748056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.748082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.748259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.748285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 
00:34:39.368 [2024-07-14 02:21:44.748460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.748486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.748662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.748688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.748873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.748900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.749053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.749078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.749255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.749281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.368 qpair failed and we were unable to recover it. 00:34:39.368 [2024-07-14 02:21:44.749465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.368 [2024-07-14 02:21:44.749491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.369 qpair failed and we were unable to recover it. 00:34:39.369 [2024-07-14 02:21:44.749666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.369 [2024-07-14 02:21:44.749691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.369 qpair failed and we were unable to recover it. 00:34:39.369 [2024-07-14 02:21:44.749879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.369 [2024-07-14 02:21:44.749906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.369 qpair failed and we were unable to recover it. 00:34:39.369 [2024-07-14 02:21:44.750060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.369 [2024-07-14 02:21:44.750085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.369 qpair failed and we were unable to recover it. 00:34:39.369 [2024-07-14 02:21:44.750230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.369 [2024-07-14 02:21:44.750255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.369 qpair failed and we were unable to recover it. 
00:34:39.369 [2024-07-14 02:21:44.750407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.369 [2024-07-14 02:21:44.750433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420
00:34:39.369 qpair failed and we were unable to recover it.
[the same three-line sequence - posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." - repeats for every connect attempt logged between 02:21:44.750 and 02:21:44.796 (console time 00:34:39.369 through 00:34:39.375)]
00:34:39.375 [2024-07-14 02:21:44.796341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.796369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.796543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.796568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.796796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.796824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.797060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.797085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.797307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.797336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.797538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.797565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.797794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.797822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.798022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.798050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.798225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.798253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.798421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.798448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 
00:34:39.375 [2024-07-14 02:21:44.798654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.798683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.798921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.798947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.799167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.799195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.799394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.799419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.799619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.799646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.799876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.799902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.800081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.800109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.800304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.800329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.800497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.800522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.800688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.800717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 
00:34:39.375 [2024-07-14 02:21:44.800925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.800955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.801162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.801188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.801392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.801419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.801636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.801664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.801852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.801886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.802122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.375 [2024-07-14 02:21:44.802147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.375 qpair failed and we were unable to recover it. 00:34:39.375 [2024-07-14 02:21:44.802351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.802380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.802592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.802618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.802818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.802847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.803072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.803098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 
00:34:39.376 [2024-07-14 02:21:44.803299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.803327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.803590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.803639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.803856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.803893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.804067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.804094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.804259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.804288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.804484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.804512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.804682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.804710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.804924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.804951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.805186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.805214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.805465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.805517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 
00:34:39.376 [2024-07-14 02:21:44.805737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.805765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.805944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.805969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.806149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.806175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.806398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.806448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.806667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.806695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.806920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.806946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.807126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.807154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.807455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.807517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.807734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.807762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.807966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.807991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 
00:34:39.376 [2024-07-14 02:21:44.808167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.808193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.808428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.808454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.808628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.808656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.808852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.808886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.809090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.809119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.809323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.809372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.809570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.809597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.809765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.809790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.809972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.809998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.810206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.810234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 
00:34:39.376 [2024-07-14 02:21:44.810406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.810434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.810630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.810655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.810880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.810908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.811164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.811215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.811406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.811434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.811628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.376 [2024-07-14 02:21:44.811653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.376 qpair failed and we were unable to recover it. 00:34:39.376 [2024-07-14 02:21:44.811816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.811845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.812082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.812111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.812328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.812356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.812557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.812583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 
00:34:39.377 [2024-07-14 02:21:44.812785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.812811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.813037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.813063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.813285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.813313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.813482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.813508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.813678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.813706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.813906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.813939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.814109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.814138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.814355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.814380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.814578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.814605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.814791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.814819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 
00:34:39.377 [2024-07-14 02:21:44.815032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.815061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.815259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.815284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.815458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.815486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.815677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.815705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.815910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.815939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.816127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.816152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.816352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.816377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.816661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.816715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.816911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.816941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.817122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.817147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 
00:34:39.377 [2024-07-14 02:21:44.817371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.817399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.817700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.817749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.817993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.818020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.818203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.818228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.818456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.818484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.818722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.818773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.819010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.819036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.819219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.819245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.819476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.819503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.819691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.819718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 
00:34:39.377 [2024-07-14 02:21:44.819914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.819941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.377 [2024-07-14 02:21:44.820134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.377 [2024-07-14 02:21:44.820159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.377 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.820339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.820363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.820539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.820563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.820763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.820790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.821018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.821043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.821239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.821271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.821555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.821604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.821834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.821862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.822050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.822074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 
00:34:39.378 [2024-07-14 02:21:44.822256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.822281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.822456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.822482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.822659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.822684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.822855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.822889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.823066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.823091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.823249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.823275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.823421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.823446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.823619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.823644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.823845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.823881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.824055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.824084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 
00:34:39.378 [2024-07-14 02:21:44.824300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.824326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.824525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.824550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.824775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.824803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.824972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.825002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.825210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.825235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.825413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.825439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.825617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.825642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.825817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.825843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.826024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.826049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.826223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.826247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 
00:34:39.378 [2024-07-14 02:21:44.826447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.826472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.826621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.826646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.826794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.826819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.827003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.827029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.827209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.827235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.827502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.827557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.827745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.827773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.827984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.828010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.828158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.828183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.828509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.828566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 
00:34:39.378 [2024-07-14 02:21:44.828733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.828760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.828931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.828958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.378 qpair failed and we were unable to recover it. 00:34:39.378 [2024-07-14 02:21:44.829135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.378 [2024-07-14 02:21:44.829167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.829345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.829370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.829565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.829592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.829766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.829791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.829992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.830022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.830202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.830227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.830374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.830399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.830608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.830632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 
00:34:39.379 [2024-07-14 02:21:44.830832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.830860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.831093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.831120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.831303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.831328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.831500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.831525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.831700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.831725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.832002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.832030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.832250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.832278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.832452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.832477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.832659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.832684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.832854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.832887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 
00:34:39.379 [2024-07-14 02:21:44.833094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.833122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.833327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.833353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.833579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.833607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.833808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.833836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.834059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.834085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.834239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.834264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.834470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.834495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.834647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.834673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.834845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.834879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.835034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.835059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 
00:34:39.379 [2024-07-14 02:21:44.835214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.835239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.835379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.835404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.835579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.835605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.835756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.835781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.835935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.835961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.836156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.836184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.836386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.836414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.836584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.836609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.836810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.836853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.837054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.837082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 
00:34:39.379 [2024-07-14 02:21:44.837276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.837305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.837525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.837550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.837725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.379 [2024-07-14 02:21:44.837750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.379 qpair failed and we were unable to recover it. 00:34:39.379 [2024-07-14 02:21:44.837920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.837946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.838103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.838127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.838307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.838332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.838557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.838589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.838821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.838849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.839032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.839062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.839262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.839288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 
00:34:39.380 [2024-07-14 02:21:44.839451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.839479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.839771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.839819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.840025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.840053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.840241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.840267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.840445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.840471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.840651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.840676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.840886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.840918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.841109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.841134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.841311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.841337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.841537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.841562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 
00:34:39.380 [2024-07-14 02:21:44.841772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.841797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.841941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.841967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.842143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.842167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.842466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.842517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.842717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.842746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.842941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.842966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.843111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.843136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.843372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.843422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.843610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.843638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.843831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.843856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 
00:34:39.380 [2024-07-14 02:21:44.844034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.844061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.844216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.844241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.844416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.844441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.844646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.844672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.844843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.844876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.845081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.845106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.845314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.845342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.845539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.845564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.845752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.845781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.845971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.846000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 
00:34:39.380 [2024-07-14 02:21:44.846162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.846190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.846363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.846388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.846569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.846596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.846836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.380 [2024-07-14 02:21:44.846871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.380 qpair failed and we were unable to recover it. 00:34:39.380 [2024-07-14 02:21:44.847044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.847072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.847278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.847303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.847479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.847509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.847659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.847684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.847839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.847882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.848057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.848082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 
00:34:39.381 [2024-07-14 02:21:44.848275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.848302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.848493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.848520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.848667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.848693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.848840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.848872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.849048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.849073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.849221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.849246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.849421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.849447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.849626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.849651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.849797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.849822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.849975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.850001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 
00:34:39.381 [2024-07-14 02:21:44.850170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.850198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.850369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.850395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.850567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.850592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.850738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.850763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.850917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.850943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.851091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.851117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.851299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.851325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.851562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.851588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.851816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.851844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.852014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.852039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 
00:34:39.381 [2024-07-14 02:21:44.852193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.852219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.852400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.852426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.852640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.852668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.852843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.852875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.853073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.853101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.853353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.853400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.853602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.853628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.853805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.853831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.854013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.381 [2024-07-14 02:21:44.854038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.381 qpair failed and we were unable to recover it. 00:34:39.381 [2024-07-14 02:21:44.854218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.854244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 
00:34:39.382 [2024-07-14 02:21:44.854400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.854425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.854630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.854655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.854831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.854856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.855018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.855043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.855188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.855213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.855390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.855415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.855589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.855617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.855770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.855795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.855941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.855968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.856141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.856166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 
00:34:39.382 [2024-07-14 02:21:44.856323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.856349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.856498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.856524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.856719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.856747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.856973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.856999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.857171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.857200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.857451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.857479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.857705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.857730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.857883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.857909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.858085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.858126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.858304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.858329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 
00:34:39.382 [2024-07-14 02:21:44.858495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.858520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.858668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.858693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.858893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.858922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.859099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.859125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.859299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.859324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.859495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.859520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.859717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.859745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.859962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.859991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.860183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.860211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.860409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.860434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 
00:34:39.382 [2024-07-14 02:21:44.860589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.860614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.860788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.860815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.860999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.861025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.861201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.861226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.861376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.861403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.861588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.861632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.861823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.861851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.862082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.862108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.862283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.862308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.382 qpair failed and we were unable to recover it. 00:34:39.382 [2024-07-14 02:21:44.862488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.382 [2024-07-14 02:21:44.862514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 
00:34:39.383 [2024-07-14 02:21:44.862693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.862718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.862920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.862946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.863097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.863122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.863275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.863299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.863476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.863502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.863680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.863705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.863905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.863935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.864087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.864112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.864327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.864355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.864583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.864608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 
00:34:39.383 [2024-07-14 02:21:44.864825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.864853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.865034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.865064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.865289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.865317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.865491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.865517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.865667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.865693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.865899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.865944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.866146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.866175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.866391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.866416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.866616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.866644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.866847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.866883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 
00:34:39.383 [2024-07-14 02:21:44.867098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.867123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.867327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.867352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.867530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.867555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.867758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.867783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.867979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.868008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.868179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.868205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.868403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.868428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.868576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.868601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.868753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.868779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.868928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.868954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 
00:34:39.383 [2024-07-14 02:21:44.869098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.869138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.869320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.869346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.869546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.869572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.869752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.869777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.869979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.870005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.870226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.870254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.870444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.870472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.870647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.870672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.870891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.870917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 00:34:39.383 [2024-07-14 02:21:44.871068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.383 [2024-07-14 02:21:44.871094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.383 qpair failed and we were unable to recover it. 
00:34:39.383 [2024-07-14 02:21:44.871246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.871273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.871453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.871478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.871703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.871731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.871903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.871930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.872122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.872150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.872339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.872364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.872517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.872547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.872702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.872727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.872909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.872935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.873166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.873191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 
00:34:39.384 [2024-07-14 02:21:44.873391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.873419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.873667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.873716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.873904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.873932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.874094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.874120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.874299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.874323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.874498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.874540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.874735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.874763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.874953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.874979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.875159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.875184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.875365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.875391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 
00:34:39.384 [2024-07-14 02:21:44.875539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.875564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.875737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.875762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.875988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.876017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.876285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.876334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.876535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.876560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.876761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.876789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.876992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.877017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.877193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.877219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.877454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.877482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.877684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.877709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 
00:34:39.384 [2024-07-14 02:21:44.877855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.877892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.878066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.878091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.878269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.878294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.878472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.878498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.878737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.878764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.878964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.878992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.879188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.879216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.879390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.879415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.879612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.879637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.879817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.879842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 
00:34:39.384 [2024-07-14 02:21:44.880059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.384 [2024-07-14 02:21:44.880088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.384 qpair failed and we were unable to recover it. 00:34:39.384 [2024-07-14 02:21:44.880257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.880282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.880472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.880499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.880787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.880832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.881058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.881087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.881267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.881292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.881438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.881467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.881671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.881720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.881915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.881944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.882135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.882160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 
00:34:39.385 [2024-07-14 02:21:44.882387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.882415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.882589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.882615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.882772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.882797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.882998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.883023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.883202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.883230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.883441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.883489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.883684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.883712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.883878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.883903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.884077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.884102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.884252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.884277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 
00:34:39.385 [2024-07-14 02:21:44.884476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.884503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.884701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.884726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.884932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.884958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.885135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.885160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.885357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.885385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.885548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.885573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.885720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.885746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.885950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.885977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.886123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.886148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.886329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.886354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 
00:34:39.385 [2024-07-14 02:21:44.886583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.886611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.886809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.886834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.887022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.887048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.887232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.887257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.887406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.887432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.887611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.385 [2024-07-14 02:21:44.887637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.385 qpair failed and we were unable to recover it. 00:34:39.385 [2024-07-14 02:21:44.887813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.887838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.887993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.888019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.888190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.888215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.888484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.888531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 
00:34:39.386 [2024-07-14 02:21:44.888730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.888758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.888955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.888981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.889160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.889185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.889360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.889385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.889583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.889613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.889834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.889859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.890044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.890073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.890252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.890277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.890423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.890448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.890627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.890652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 
00:34:39.386 [2024-07-14 02:21:44.890801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.890828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.891010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.891036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.891190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.891216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.891408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.891434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.891635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.891663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.891841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.891874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.892084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.892109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.892287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.892313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.892512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.892542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.892727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.892755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 
00:34:39.386 [2024-07-14 02:21:44.892961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.892987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.893167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.893192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.893390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.893414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.893565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.893589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.893791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.893816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.893963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.893990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.894181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.894210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.894435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.894487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.894718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.894746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.894944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.894970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 
00:34:39.386 [2024-07-14 02:21:44.895143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.895169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.895322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.895348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.895525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.895551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.895701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.895727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.895931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.895956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.386 [2024-07-14 02:21:44.896110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.386 [2024-07-14 02:21:44.896135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.386 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.896283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.896325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.896496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.896523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.896703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.896729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.896906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.896932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 
00:34:39.387 [2024-07-14 02:21:44.897110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.897135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.897336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.897361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.897523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.897548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.897721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.897747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.897948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.897974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.898147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.898172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.898314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.898361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.898561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.898609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.898834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.898862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.899093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.899118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 
00:34:39.387 [2024-07-14 02:21:44.899295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.899321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.899589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.899640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.899829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.899857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.900036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.900061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.900233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.900258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.900487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.900535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.900763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.900790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.900956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.900982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.901160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.901186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.901433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.901482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 
00:34:39.387 [2024-07-14 02:21:44.901709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.901737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.901968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.901994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.902168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.902194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.902373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.902398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.902593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.902621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.902784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.902809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.903010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.903036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.903212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.903238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.903430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.903458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.903626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.903653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 
00:34:39.387 [2024-07-14 02:21:44.903798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.903823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.904009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.904035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.904196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.904224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.904423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.904448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.904665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.904690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.904876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.904902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.387 [2024-07-14 02:21:44.905104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.387 [2024-07-14 02:21:44.905129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.387 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.905414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.905465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.905668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.905696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.905891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.905934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 
00:34:39.388 [2024-07-14 02:21:44.906077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.906103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.906283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.906308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.906476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.906504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.906661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.906689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.906849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.906898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.907066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.907091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.907244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.907289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.907484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.907510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.907703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.907730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.907926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.907951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 
00:34:39.388 [2024-07-14 02:21:44.908134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.908160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.908372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.908414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.908653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.908681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.908902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.908927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.909156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.909185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.909406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.909459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.909655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.909683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.909856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.909890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.910072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.910098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.910330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.910380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 
00:34:39.388 [2024-07-14 02:21:44.910583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.910611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.910831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.910857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.911042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.911069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.911318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.911366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.911557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.911585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.911765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.911790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.911940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.911966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.912233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.912286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.912480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.912508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.912694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.912720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 
00:34:39.388 [2024-07-14 02:21:44.912874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.912900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.913099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.913127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.913300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.913328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.913563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.913587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.913807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.913835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.914008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.914036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.914233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.388 [2024-07-14 02:21:44.914261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.388 qpair failed and we were unable to recover it. 00:34:39.388 [2024-07-14 02:21:44.914460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.914485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.914661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.914689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.914885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.914913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 
00:34:39.389 [2024-07-14 02:21:44.915106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.915134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.915352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.915377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.915550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.915576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.915733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.915758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.915959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.915988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.916154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.916179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.916403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.916436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.916703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.916752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.916950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.916976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.917151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.917176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 
00:34:39.389 [2024-07-14 02:21:44.917341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.917367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.917635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.917684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.917913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.917941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.918142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.918166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.918369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.918397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.918592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.918620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.918811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.918839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.919037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.919062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.919242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.919267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.919468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.919493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 
00:34:39.389 [2024-07-14 02:21:44.919744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.919770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.919919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.919944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.920138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.920167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.920404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.920452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.920674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.920700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.920876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.920903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.921132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.921160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.921372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.921398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.921593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.921621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.921824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.921850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 
00:34:39.389 [2024-07-14 02:21:44.922062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.922091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.922346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.922395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.922570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.922598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.922826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.922852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.923014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.923040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.389 [2024-07-14 02:21:44.923192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.389 [2024-07-14 02:21:44.923218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.389 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.923439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.923467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.923662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.923688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.923914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.923944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.924112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.924140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 
00:34:39.390 [2024-07-14 02:21:44.924331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.924359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.924555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.924580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.924780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.924808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.924960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.924989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.925184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.925212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.925429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.925454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.925605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.925630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.925812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.925837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.926058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.926083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.926260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.926285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 
00:34:39.390 [2024-07-14 02:21:44.926480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.926508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.926761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.926810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.927027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.927053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.927224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.927250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.927425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.927450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.927627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.927652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.927881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.927912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.928134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.928160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.928354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.928384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.928652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.928701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 
00:34:39.390 [2024-07-14 02:21:44.928933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.928961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.929162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.929187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.929363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.929391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.929586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.929613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.929778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.929806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.930008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.930034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.930264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.930292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.930562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.930588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.930790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.930823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.931029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.931054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 
00:34:39.390 [2024-07-14 02:21:44.931226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.931251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.931508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.931559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.931729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.390 [2024-07-14 02:21:44.931757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.390 qpair failed and we were unable to recover it. 00:34:39.390 [2024-07-14 02:21:44.931958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.931987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.932183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.932213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.932438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.932488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.932693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.932718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.932863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.932893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.933089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.933117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.933365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.933391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 
00:34:39.391 [2024-07-14 02:21:44.933540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.933565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.933740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.933767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.933967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.933993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.934231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.934285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.934513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.934541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.934770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.934795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.934994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.935023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.935248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.935276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.935499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.935525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.935724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.935749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 
00:34:39.391 [2024-07-14 02:21:44.935979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.936008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.936282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.936330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.936497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.936525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.936700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.936726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.936956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.936986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.937182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.937210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.937378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.937406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.937575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.937601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.937776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.937801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 00:34:39.391 [2024-07-14 02:21:44.937970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.391 [2024-07-14 02:21:44.938000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.391 qpair failed and we were unable to recover it. 
00:34:39.391 [2024-07-14 02:21:44.938196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.391 [2024-07-14 02:21:44.938225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420
00:34:39.391 qpair failed and we were unable to recover it.
[... the same triplet — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously from 2024-07-14 02:21:44.938 through 02:21:44.986 ...]
00:34:39.680 [2024-07-14 02:21:44.986827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.680 [2024-07-14 02:21:44.986851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420
00:34:39.680 qpair failed and we were unable to recover it.
00:34:39.680 [2024-07-14 02:21:44.987056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.680 [2024-07-14 02:21:44.987084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.680 qpair failed and we were unable to recover it. 00:34:39.680 [2024-07-14 02:21:44.987330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.680 [2024-07-14 02:21:44.987381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.680 qpair failed and we were unable to recover it. 00:34:39.680 [2024-07-14 02:21:44.987585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.680 [2024-07-14 02:21:44.987613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.680 qpair failed and we were unable to recover it. 00:34:39.680 [2024-07-14 02:21:44.987789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.680 [2024-07-14 02:21:44.987818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.680 qpair failed and we were unable to recover it. 00:34:39.680 [2024-07-14 02:21:44.987994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.680 [2024-07-14 02:21:44.988020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.680 qpair failed and we were unable to recover it. 00:34:39.680 [2024-07-14 02:21:44.988283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.680 [2024-07-14 02:21:44.988335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.680 qpair failed and we were unable to recover it. 00:34:39.680 [2024-07-14 02:21:44.988543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.680 [2024-07-14 02:21:44.988571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.680 qpair failed and we were unable to recover it. 00:34:39.680 [2024-07-14 02:21:44.988767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.680 [2024-07-14 02:21:44.988793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.680 qpair failed and we were unable to recover it. 00:34:39.680 [2024-07-14 02:21:44.988994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.680 [2024-07-14 02:21:44.989023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.680 qpair failed and we were unable to recover it. 00:34:39.680 [2024-07-14 02:21:44.989246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.989271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 
00:34:39.681 [2024-07-14 02:21:44.989447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.989474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.989694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.989719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.989949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.989975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.990176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.990205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.990366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.990394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.990566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.990592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.990816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.990844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.991073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.991101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.991273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.991301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.991465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.991491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 
00:34:39.681 [2024-07-14 02:21:44.991649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.991674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.991895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.991924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.992109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.992138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.992317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.992342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.992541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.992566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.992750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.992779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.992981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.993007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.993186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.993212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.993407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.993434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.993745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.993813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 
00:34:39.681 [2024-07-14 02:21:44.994024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.994053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.994273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.994298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.994495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.994523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.994712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.994740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.994935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.994964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.995172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.995198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.995376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.995401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.995709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.995763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.995960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.995987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.996166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.996191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 
00:34:39.681 [2024-07-14 02:21:44.996393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.996421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.996613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.996641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.996832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.996859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.997069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.997099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.997252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.997279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.997519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.997581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.997749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.997779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.997979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.998005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.998150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.998175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.681 qpair failed and we were unable to recover it. 00:34:39.681 [2024-07-14 02:21:44.998372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.681 [2024-07-14 02:21:44.998400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 
00:34:39.682 [2024-07-14 02:21:44.998620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:44.998648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:44.998851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:44.998882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:44.999090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:44.999119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:44.999409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:44.999465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:44.999681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:44.999709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:44.999916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:44.999942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.000106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.000135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.000366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.000395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.000614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.000642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.000818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.000843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 
00:34:39.682 [2024-07-14 02:21:45.001029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.001054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.001231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.001261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.001446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.001475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.001665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.001690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.001843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.001882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.002041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.002065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.002283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.002311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.002506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.002532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.002757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.002785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.002982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.003009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 
00:34:39.682 [2024-07-14 02:21:45.003170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.003198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.003399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.003424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.003599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.003629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.003802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.003830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.004036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.004065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.004285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.004310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.004519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.004547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.004767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.004794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.005015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.005044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.005220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.005246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 
00:34:39.682 [2024-07-14 02:21:45.005441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.005470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.005742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.005789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.682 [2024-07-14 02:21:45.006021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.682 [2024-07-14 02:21:45.006046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.682 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.006250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.006278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.006465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.006493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.006690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.006718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.006906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.006934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.007133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.007159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.007299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.007325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.007518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.007578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 
00:34:39.683 [2024-07-14 02:21:45.007757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.007785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.007976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.008002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.008197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.008225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.008436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.008462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.008644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.008669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.008881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.008907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.009124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.009153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.009424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.009474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.009684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.009710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.009873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.009899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 
00:34:39.683 [2024-07-14 02:21:45.010095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.010124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.010367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.010415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.010637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.010666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.010840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.010881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.011090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.011118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.011428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.011492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.011690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.011718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.011913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.011939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.012121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.012147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.012414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.012464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 
00:34:39.683 [2024-07-14 02:21:45.012637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.012665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.012861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.012894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.013094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.013122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.013300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.013326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.013546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.013574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.013797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.013822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.014056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.014086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.014384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.014435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.014664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.014692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.014878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.014904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 
00:34:39.683 [2024-07-14 02:21:45.015098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.015127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.015459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.015513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.683 qpair failed and we were unable to recover it. 00:34:39.683 [2024-07-14 02:21:45.015743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.683 [2024-07-14 02:21:45.015768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.015943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.015973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.016185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.016213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.016441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.016469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.016635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.016663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.016870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.016896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.017121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.017148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.017381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.017432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 
00:34:39.684 [2024-07-14 02:21:45.017653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.017678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.017829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.017856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.018093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.018121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.018388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.018436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.018655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.018683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.018912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.018938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.019164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.019192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.019433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.019461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.019689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.019717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.019911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.019936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 
00:34:39.684 [2024-07-14 02:21:45.020115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.020142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.020468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.020515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.020711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.020739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.020908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.020934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.021110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.021135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.021311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.021336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.021515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.021543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.021725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.021754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.021932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.021958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.022111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.022151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 
00:34:39.684 [2024-07-14 02:21:45.022325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.022353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.022532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.022557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.022778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.022806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.022973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.023002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.023196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.023224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.023420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.023446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.023603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.023631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.023849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.023884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.024081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.024109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.024281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.024306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 
00:34:39.684 [2024-07-14 02:21:45.024488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.024513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.024689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.024714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.024917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.684 [2024-07-14 02:21:45.024946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.684 qpair failed and we were unable to recover it. 00:34:39.684 [2024-07-14 02:21:45.025148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.025177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.025350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.025378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.025631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.025682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.025880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.025908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.026096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.026121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.026345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.026373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.026568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.026594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 
00:34:39.685 [2024-07-14 02:21:45.026765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.026790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.026969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.026994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.027218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.027246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.027483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.027509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.027705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.027733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.027937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.027963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.028170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.028195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.028465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.028517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.028711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.028739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.028966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.028991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 
00:34:39.685 [2024-07-14 02:21:45.029223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.029252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.029527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.029578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.029777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.029805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.030012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.030038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.030262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.030290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.030610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.030670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.030887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.030916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.031135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.031160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.031355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.031384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.031664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.031714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 
00:34:39.685 [2024-07-14 02:21:45.031917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.031945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.032142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.032167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.032388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.032416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.032708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.032762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.032980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.033009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.033209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.033234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.033404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.033432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.033629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.033657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.033823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.033852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.034080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.034106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 
00:34:39.685 [2024-07-14 02:21:45.034274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.034302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.034471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.034500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.034726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.034752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.034910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.685 [2024-07-14 02:21:45.034941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.685 qpair failed and we were unable to recover it. 00:34:39.685 [2024-07-14 02:21:45.035121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.035147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.035322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.035348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.035497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.035523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.035723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.035749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.035955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.035983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.036182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.036208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 
00:34:39.686 [2024-07-14 02:21:45.036358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.036384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.036563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.036588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.036758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.036787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.036993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.037022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.037249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.037277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.037471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.037496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.037650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.037675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.037870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.037896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.038106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.038131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.038328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.038353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 
00:34:39.686 [2024-07-14 02:21:45.038527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.038555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.038754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.038782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.038986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.039012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.039190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.039216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.039441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.039469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.039737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.039787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.040000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.040028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.040207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.040233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.040411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.040436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.040604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.040629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 
00:34:39.686 [2024-07-14 02:21:45.040876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.040902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.041082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.041107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.041280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.041309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.041639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.041688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.041921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.041946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.042095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.042121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.042301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.042327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.042497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.042526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.042744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.686 [2024-07-14 02:21:45.042772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.686 qpair failed and we were unable to recover it. 00:34:39.686 [2024-07-14 02:21:45.042968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.042994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 
00:34:39.687 [2024-07-14 02:21:45.043176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.043201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.043466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.043515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.043714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.043740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.043943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.043973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.044144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.044171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.044410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.044460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.044656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.044681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.044854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.044884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.045085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.045113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.045390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.045439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 
00:34:39.687 [2024-07-14 02:21:45.045643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.045673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.045875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.045901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.046098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.046126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.046466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.046520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.046713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.046741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.046924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.046950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.047128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.047154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.047387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.047416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.047604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.047632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.047832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.047858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 
00:34:39.687 [2024-07-14 02:21:45.048015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.048041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.048220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.048246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.048416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.048445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.048670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.048695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.048923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.048952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.049126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.049154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.049359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.049385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.049584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.049610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.049781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.049810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.049988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.050014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 
00:34:39.687 [2024-07-14 02:21:45.050168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.050200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.050351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.050376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.050598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.050626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.050819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.050847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.051073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.051101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.051308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.051333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.051555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.051583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.051787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.051812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.052006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.052032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 00:34:39.687 [2024-07-14 02:21:45.052256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.687 [2024-07-14 02:21:45.052281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.687 qpair failed and we were unable to recover it. 
00:34:39.688 [2024-07-14 02:21:45.052508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.052537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.052731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.052761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.052953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.052982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.053205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.053230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.053421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.053448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.053627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.053653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.053823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.053848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.054063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.054089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.054333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.054358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.054591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.054646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 
00:34:39.688 [2024-07-14 02:21:45.054842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.054876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.055079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.055104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.055259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.055284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.055460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.055486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.055639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.055664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.055858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.055900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.056141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.056166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.056349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.056374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.056578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.056606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.056778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.056805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 
00:34:39.688 [2024-07-14 02:21:45.056988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.057014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.057203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.057290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.057509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.057537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.057766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.057795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.058026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.058052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.058207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.058232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.058444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.058472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.058675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.058701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.058891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.058916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.059130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.059158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 
00:34:39.688 [2024-07-14 02:21:45.059373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.059403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.059581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.059606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.059802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.059829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.060031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.060057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.060254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.060283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.060506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.060531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.060729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.060756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.060944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.060972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.061139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.061166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.688 [2024-07-14 02:21:45.061365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.061390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 
00:34:39.688 [2024-07-14 02:21:45.061533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.688 [2024-07-14 02:21:45.061558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.688 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.061764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.061792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.061956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.061984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.062179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.062204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.062371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.062400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.062625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.062674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.062897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.062926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.063151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.063176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.063346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.063374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.063661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.063714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 
00:34:39.689 [2024-07-14 02:21:45.063899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.063928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.064154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.064179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.064403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.064432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.064762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.064814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.065020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.065049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.065268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.065294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.065495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.065523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.065719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.065747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.065968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.065997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.066190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.066215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 
00:34:39.689 [2024-07-14 02:21:45.066379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.066407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.066608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.066635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.066850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.066885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.067080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.067105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.067300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.067328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.067522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.067548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.067773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.067801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.068005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.068031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.068181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.068206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.068407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.068435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 
00:34:39.689 [2024-07-14 02:21:45.068629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.068662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.068860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.068894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.069113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.069140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.069362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.069390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.069562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.069590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.069768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.069794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.069952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.069978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.070132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.070159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.070364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.070393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.070590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.070615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 
00:34:39.689 [2024-07-14 02:21:45.070813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.070842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.689 qpair failed and we were unable to recover it. 00:34:39.689 [2024-07-14 02:21:45.071027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.689 [2024-07-14 02:21:45.071052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.071232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.071258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.071439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.071464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.071619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.071647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.071793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.071819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.071971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.071998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.072177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.072203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.072377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.072402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.072574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.072599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 
00:34:39.690 [2024-07-14 02:21:45.072750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.072776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.072973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.072999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.073156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.073182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.073341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.073385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.073586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.073615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.073845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.073876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.074019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.074045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.074258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.074287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.074481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.074509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.074706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.074732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 
00:34:39.690 [2024-07-14 02:21:45.074946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.074972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.075166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.075199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.075434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.075462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.075650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.075675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.075847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.075882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.076076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.076102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.076276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.076300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.076480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.076505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.076701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.076729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.076947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.076972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 
00:34:39.690 [2024-07-14 02:21:45.077147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.077177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.077354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.077379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.077579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.077607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.077803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.077831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.078065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.078091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.078268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.078294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.078517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.078545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.078738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.078766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.078976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.079004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.079169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.079194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 
00:34:39.690 [2024-07-14 02:21:45.079389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.690 [2024-07-14 02:21:45.079417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.690 qpair failed and we were unable to recover it. 00:34:39.690 [2024-07-14 02:21:45.079643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.079688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.079908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.079935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.080088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.080114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.080263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.080306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.080621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.080675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.080894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.080938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.081091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.081116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.081286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.081314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.081536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.081561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 
00:34:39.691 [2024-07-14 02:21:45.081709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.081748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.081954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.081979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.082157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.082182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.082357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.082382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.082587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.082615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.082805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.082830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.082985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.083011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.083166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.083193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.083349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.083392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.083588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.083614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 
00:34:39.691 [2024-07-14 02:21:45.083786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.083819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.084021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.084047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.084204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.084230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.084383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.084409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.084586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.084611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.084784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.084810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.084959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.084985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.085135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.085160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.085382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.085410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.085641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.085693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 
00:34:39.691 [2024-07-14 02:21:45.085850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.085890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.086054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.086081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.086300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.086328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.086527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.086553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.086786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.086814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.086985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.087011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.087165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.087191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.691 [2024-07-14 02:21:45.087367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.691 [2024-07-14 02:21:45.087392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.691 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.087596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.087626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.087843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.087875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 
00:34:39.692 [2024-07-14 02:21:45.088026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.088052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.088206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.088232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.088423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.088452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.088653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.088678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.088846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.088885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.089059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.089084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.089308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.089333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.089537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.089562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.089766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.089794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.089996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.090023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 
00:34:39.692 [2024-07-14 02:21:45.090167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.090193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.090357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.090383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.090566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.090591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.090790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.090818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.091022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.091048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.091229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.091255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.091416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.091444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.091650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.091678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.091877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.091905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.092078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.092103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 
00:34:39.692 [2024-07-14 02:21:45.092281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.092306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.092473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.092499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.092674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.092699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.092877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.092906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.093093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.093119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.093331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.093357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.093503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.093529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.093737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.093762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.093916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.093943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 00:34:39.692 [2024-07-14 02:21:45.094095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.692 [2024-07-14 02:21:45.094121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.692 qpair failed and we were unable to recover it. 
00:34:39.692 [2024-07-14 02:21:45.094299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.692 [2024-07-14 02:21:45.094331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420
00:34:39.692 qpair failed and we were unable to recover it.
00:34:39.692 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 2024-07-14 02:21:45.137168, elapsed 00:34:39.698 ...]
00:34:39.698 [2024-07-14 02:21:45.137340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.137366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.137514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.137539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.137712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.137739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.137943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.137970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.138143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.138186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.138361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.138387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.138564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.138589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.138762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.138787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.138989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.139015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.139162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.139189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 
00:34:39.698 [2024-07-14 02:21:45.139342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.139368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.139578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.139604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.139746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.139772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.139964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.139990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.140162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.140188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.140360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.140385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.140565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.140590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.140770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.140795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.140966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.140992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.141145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.141171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 
00:34:39.698 [2024-07-14 02:21:45.141392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.141419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.141618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.141647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.141817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.141843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.142029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.142054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.142256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.142282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.142460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.142485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.142664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.142690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.142862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.142895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.143076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.143101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.143280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.143305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 
00:34:39.698 [2024-07-14 02:21:45.143450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.143480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.143658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.143684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.143839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.143873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.144023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.144049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.144202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.698 [2024-07-14 02:21:45.144228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.698 qpair failed and we were unable to recover it. 00:34:39.698 [2024-07-14 02:21:45.144407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.144432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.144614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.144639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.144792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.144817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.145018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.145043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.145257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.145282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 
00:34:39.699 [2024-07-14 02:21:45.145435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.145461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.145662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.145688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.145832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.145857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.146055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.146080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.146282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.146308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.146577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.146602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.146776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.146802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.146977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.147003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.147205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.147230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.147427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.147452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 
00:34:39.699 [2024-07-14 02:21:45.147625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.147650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.147800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.147826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.148028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.148054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.148201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.148227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.148404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.148430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.148624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.148650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.148853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.148885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.149086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.149114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.149322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.149347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.149500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.149526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 
00:34:39.699 [2024-07-14 02:21:45.149731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.149757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.149910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.149936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.150116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.150141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.150287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.150313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.150526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.150552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.150725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.150750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.150953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.150982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.151210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.151251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.151424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.151452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 00:34:39.699 [2024-07-14 02:21:45.151639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.699 [2024-07-14 02:21:45.151667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.699 qpair failed and we were unable to recover it. 
00:34:39.700 [2024-07-14 02:21:45.151889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.151935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.152085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.152110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.152316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.152341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.152494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.152520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.152700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.152725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.152872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.152898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.153070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.153096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.153244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.153269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.153409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.153434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.153609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.153634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 
00:34:39.700 [2024-07-14 02:21:45.153809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.153835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.154045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.154071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.154232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.154260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.154477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.154502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.154686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.154712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.154892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.154918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.155094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.155120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.155298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.155324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.155471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.155496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.155652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.155678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 
00:34:39.700 [2024-07-14 02:21:45.155856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.155888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.156097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.156122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.156286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.156314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.156538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.156566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.156754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.156782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.156955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.156980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.157129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.157154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.157313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.157342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.157518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.157560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.157748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.157774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 
00:34:39.700 [2024-07-14 02:21:45.157952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.157978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.158227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.158277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.158504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.158529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.158701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.158726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.158907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.158933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.159109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.159135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.159279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.159304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.159462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.159488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.159688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.159713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.700 qpair failed and we were unable to recover it. 00:34:39.700 [2024-07-14 02:21:45.159886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.700 [2024-07-14 02:21:45.159912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 
00:34:39.701 [2024-07-14 02:21:45.160085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.160110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.160289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.160314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.160484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.160509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.160701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.160727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.160927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.160953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.161140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.161165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.161345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.161370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.161522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.161547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.161722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.161748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.161923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.161948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 
00:34:39.701 [2024-07-14 02:21:45.162135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.162160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.162316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.162341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.162481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.162506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.162682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.162707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.162859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.162893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.163062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.163088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.163231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.163256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.163403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.163428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.163628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.163653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 00:34:39.701 [2024-07-14 02:21:45.163860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.701 [2024-07-14 02:21:45.163904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.701 qpair failed and we were unable to recover it. 
00:34:39.701 [2024-07-14 02:21:45.164053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.701 [2024-07-14 02:21:45.164078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420
00:34:39.701 qpair failed and we were unable to recover it.
00:34:39.701 [... the same three-line error sequence repeats continuously with fresh timestamps through 2024-07-14 02:21:45.209300 (elapsed 00:34:39.706): every connect() attempt for tqpair=0x7f8e04000b90 to 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:34:39.706 [2024-07-14 02:21:45.209495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.706 [2024-07-14 02:21:45.209523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.706 qpair failed and we were unable to recover it. 00:34:39.706 [2024-07-14 02:21:45.209824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.706 [2024-07-14 02:21:45.209882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.706 qpair failed and we were unable to recover it. 00:34:39.706 [2024-07-14 02:21:45.210073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.706 [2024-07-14 02:21:45.210101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.706 qpair failed and we were unable to recover it. 00:34:39.706 [2024-07-14 02:21:45.210274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.706 [2024-07-14 02:21:45.210301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.706 qpair failed and we were unable to recover it. 00:34:39.706 [2024-07-14 02:21:45.210453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.706 [2024-07-14 02:21:45.210481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.706 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.210676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.210704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.210900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.210929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.211127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.211152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.211338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.211363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.211548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.211573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 
00:34:39.707 [2024-07-14 02:21:45.211776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.211804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.211980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.212007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.212237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.212265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.212539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.212590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.212761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.212789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.212957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.212982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.213210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.213239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.213529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.213585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.213781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.213809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.213985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.214011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 
00:34:39.707 [2024-07-14 02:21:45.214187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.214212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.214529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.214581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.214749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.214774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.214975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.215000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.215187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.215215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.215527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.215585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.215784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.215809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.215991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.216017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.216209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.216237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.216464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.216489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 
00:34:39.707 [2024-07-14 02:21:45.216663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.216688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.216842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.216872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.217088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.217116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.217357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.217402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.217584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.217609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.217804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.217830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.218026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.218053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.218212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.218240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.218391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.218417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.218561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.218587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 
00:34:39.707 [2024-07-14 02:21:45.218737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.218780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.218944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.218973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.219187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.219212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.219421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.219447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.219674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.219702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.707 qpair failed and we were unable to recover it. 00:34:39.707 [2024-07-14 02:21:45.219875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.707 [2024-07-14 02:21:45.219904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.220073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.220102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.220277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.220303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.220516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.220558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.220780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.220808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 
00:34:39.708 [2024-07-14 02:21:45.220999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.221029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.221243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.221269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.221446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.221472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.221711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.221739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.221932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.221960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.222163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.222190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.222386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.222415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.222632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.222660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.222847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.222884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.223064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.223090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 
00:34:39.708 [2024-07-14 02:21:45.223287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.223316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.223598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.223651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.223876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.223904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.224138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.224163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.224326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.224353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.224532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.224561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.224742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.224767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.224950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.224976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.225176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.225204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.225444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.225472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 
00:34:39.708 [2024-07-14 02:21:45.226379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.226412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.226617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.226644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.226830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.226859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.227078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.227106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.227296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.227325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.227527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.227553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.227748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.227776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.227975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.228009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.228229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.228258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 00:34:39.708 [2024-07-14 02:21:45.228459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-07-14 02:21:45.228485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.708 qpair failed and we were unable to recover it. 
00:34:39.708 [2024-07-14 02:21:45.228659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.228687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.228859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.228896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.229101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.229127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.229278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.229303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.229502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.229530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.229801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.229853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.230087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.230116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.230311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.230337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.230531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.230559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.230756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.230782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 
00:34:39.709 [2024-07-14 02:21:45.230979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.231008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.231190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.231215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.231410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.231438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.231688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.231740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.231934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.231963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.232140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.232165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.232339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.232365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.232616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.232669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.232899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.232927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.233150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.233175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 
00:34:39.709 [2024-07-14 02:21:45.233413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.233442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.233680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.233729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.233925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.233953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.234132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.234158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.234438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.234467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.234752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.234812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.235002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.235031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.235207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.235232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.235413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.235439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.235712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.235740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 
00:34:39.709 [2024-07-14 02:21:45.235937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.235966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.236224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.236250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.236441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.236469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.236801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.236858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.237066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.237091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.237252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.237277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.237488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.237517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.237781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.237815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.238015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.238044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.238267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.238292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 
00:34:39.709 [2024-07-14 02:21:45.238468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.709 [2024-07-14 02:21:45.238497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.709 qpair failed and we were unable to recover it. 00:34:39.709 [2024-07-14 02:21:45.238747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.710 [2024-07-14 02:21:45.238798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.710 qpair failed and we were unable to recover it. 00:34:39.710 [2024-07-14 02:21:45.238996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.710 [2024-07-14 02:21:45.239024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.710 qpair failed and we were unable to recover it. 00:34:39.710 [2024-07-14 02:21:45.239201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.710 [2024-07-14 02:21:45.239225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.710 qpair failed and we were unable to recover it. 00:34:39.710 [2024-07-14 02:21:45.239401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.710 [2024-07-14 02:21:45.239426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.710 qpair failed and we were unable to recover it. 00:34:39.710 [2024-07-14 02:21:45.239627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.710 [2024-07-14 02:21:45.239653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.710 qpair failed and we were unable to recover it. 00:34:39.710 [2024-07-14 02:21:45.239833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.710 [2024-07-14 02:21:45.239861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.710 qpair failed and we were unable to recover it. 00:34:39.710 [2024-07-14 02:21:45.240039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.710 [2024-07-14 02:21:45.240064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.710 qpair failed and we were unable to recover it. 00:34:39.710 [2024-07-14 02:21:45.240258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.710 [2024-07-14 02:21:45.240286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.710 qpair failed and we were unable to recover it. 00:34:39.710 [2024-07-14 02:21:45.240484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.710 [2024-07-14 02:21:45.240509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:39.710 qpair failed and we were unable to recover it. 
00:34:39.710 [2024-07-14 02:21:45.240680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.710 [2024-07-14 02:21:45.240705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420
00:34:39.710 qpair failed and we were unable to recover it.
00:34:39.713 [2024-07-14 02:21:45.270032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.713 [2024-07-14 02:21:45.270072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420
00:34:39.713 qpair failed and we were unable to recover it.
00:34:39.715 [2024-07-14 02:21:45.288559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.288616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.288793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.288818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.289024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.289051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.289260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.289303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.289530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.289582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.289803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.289830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.290033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.290059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.290288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.290331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.290541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.290585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.290743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.290770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 
00:34:39.715 [2024-07-14 02:21:45.290925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.290952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.291096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.291121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.291297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.291323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.291545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.291589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.291773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.291799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.292005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.292031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.292204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.292248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.292460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.292485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.292670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.292696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 00:34:39.715 [2024-07-14 02:21:45.292877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.715 [2024-07-14 02:21:45.292904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.715 qpair failed and we were unable to recover it. 
00:34:39.715 [2024-07-14 02:21:45.293058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.293084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.293307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.293350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.293504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.293531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.293707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.293733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.293934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.293960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.294134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.294161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.294359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.294385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.294607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.294650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.294853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.294886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.295043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.295068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 
00:34:39.716 [2024-07-14 02:21:45.295270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.295313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.295523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.295565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.295743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.295768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.295921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.295948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.296124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.296150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.296364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.296390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.296588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.296631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.296781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.296807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.296976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.297002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.297201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.297244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 
00:34:39.716 [2024-07-14 02:21:45.297441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.297485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.297638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.297663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.297841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.297872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.298051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.298077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.298311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.298358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.298563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.298606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.298751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.298776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.298926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.298951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.299151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.299177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.299372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.299416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 
00:34:39.716 [2024-07-14 02:21:45.299645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.299689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.299874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.299900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.300054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.300079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.300278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.300325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.300524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.300566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.716 [2024-07-14 02:21:45.300715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.716 [2024-07-14 02:21:45.300740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.716 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.300914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.300939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.301085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.301110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.301346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.301388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.301618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.301660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 
00:34:39.717 [2024-07-14 02:21:45.301874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.301900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.302074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.302099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.302324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.302367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.302568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.302610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.302765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.302791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.302969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.302995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.303162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.303190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.303388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.303430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.303662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.303704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.303854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.303886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 
00:34:39.717 [2024-07-14 02:21:45.304067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.304093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.304293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.304340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.304545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.304588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.304739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.304764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.304917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.304944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.305098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.305126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.305332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.305375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.305576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.305620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.305774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.305800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.305977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.306004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 
00:34:39.717 [2024-07-14 02:21:45.306232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.306275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.306484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.306527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.306698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.306724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.306935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.306961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.307136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.307166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.307371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.307415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.307590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.307616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.307790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.307817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.307971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.307997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.308191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.308238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 
00:34:39.717 [2024-07-14 02:21:45.308442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.308487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.308662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.308688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.308871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.308897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.309083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.309109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.309302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.309331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.309574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.309617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.717 qpair failed and we were unable to recover it. 00:34:39.717 [2024-07-14 02:21:45.309764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.717 [2024-07-14 02:21:45.309791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.309966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.310010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.310211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.310255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.310456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.310500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 
00:34:39.718 [2024-07-14 02:21:45.310704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.310730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.310905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.310931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.311087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.311113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.311321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.311349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.311558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.311586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.311788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.311814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.312050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.312094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.312305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.312332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.312526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.312569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.312773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.312799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 
00:34:39.718 [2024-07-14 02:21:45.312972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.313016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.313227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.313271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.313535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.313587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.313758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.313784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.313986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.314028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.314215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.314259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.314466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.314509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.314711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.314736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.314960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.315003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.315209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.315252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 
00:34:39.718 [2024-07-14 02:21:45.315452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.315495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.315672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.315697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.315879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.315906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.316133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.316175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.316443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.316490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.316694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.316738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.316943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.316986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.317135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.317162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.317398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.317441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.317682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.317724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 
00:34:39.718 [2024-07-14 02:21:45.317933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.317977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.318210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.318253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.318557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.318606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.318785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.318810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.319009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.319056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.319258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.319300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.319524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.319567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.718 qpair failed and we were unable to recover it. 00:34:39.718 [2024-07-14 02:21:45.319745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.718 [2024-07-14 02:21:45.319770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.319983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.320027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.320235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.320262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 
00:34:39.719 [2024-07-14 02:21:45.320486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.320528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.320726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.320751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.320907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.320934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.321126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.321168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.321370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.321413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.321624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.321666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.321848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.321879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.322081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.322123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.322321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.322364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.322558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.322601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 
00:34:39.719 [2024-07-14 02:21:45.322754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.322781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.322981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.323025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.323250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.323293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.323492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.323534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.323708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.323733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.323927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.323956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.324149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.324193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.324394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.324422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.324590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.324617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.324822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.324847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 
00:34:39.719 [2024-07-14 02:21:45.325059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.325088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.325309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.325352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.325587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.325630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.325807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.325832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.326019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.326068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.326298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.326341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.326546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.326589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.326784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.326810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.326971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.326997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.327198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.327241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 
00:34:39.719 [2024-07-14 02:21:45.327445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.327490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.327660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.327686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.327862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.327895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.328096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.328145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.328352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.328395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.328594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.328638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.328837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.328863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.329045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.719 [2024-07-14 02:21:45.329089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.719 qpair failed and we were unable to recover it. 00:34:39.719 [2024-07-14 02:21:45.329322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.329364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.329593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.329636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 
00:34:39.720 [2024-07-14 02:21:45.329815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.329841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.330022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.330048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.330250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.330292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.330489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.330533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.330720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.330745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.330948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.330992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.331199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.331242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.331418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.331460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.331628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.331671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.331876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.331903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 
00:34:39.720 [2024-07-14 02:21:45.332079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.332104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.332303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.332348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.332553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.332596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.332746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.332771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.332926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.332952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.333152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.333196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.333459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.333508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.333722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.333747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.333919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.333948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.334155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.334184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 
00:34:39.720 [2024-07-14 02:21:45.334397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.334439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.334640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.334683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.334858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.334889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.335035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.335060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.335281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.335327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.335519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.335563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.335715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.335740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.335964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.336008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.336206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.336235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.336486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.336529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 
00:34:39.720 [2024-07-14 02:21:45.336686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.336711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.336856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.336888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.337105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.337147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.337321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.337363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.337539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.337583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.337790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.337816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.338013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.338039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.338236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.338280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.338469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.720 [2024-07-14 02:21:45.338496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.720 qpair failed and we were unable to recover it. 00:34:39.720 [2024-07-14 02:21:45.338677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.338702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 
00:34:39.721 [2024-07-14 02:21:45.338888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.338914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.339092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.339118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.339309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.339351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.339549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.339592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.339771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.339796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.340006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.340049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.340220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.340265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.340498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.340541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.340718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.340743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.340925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.340952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 
00:34:39.721 [2024-07-14 02:21:45.341189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.341233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.341439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.341482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.341682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.341724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.341914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.341942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.342140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.342184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.342389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.342432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.342610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.342635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.342787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.342813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.343024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.343052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.343267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.343309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 
00:34:39.721 [2024-07-14 02:21:45.343513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.343557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.343760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.343786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.721 [2024-07-14 02:21:45.343985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.721 [2024-07-14 02:21:45.344027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.721 qpair failed and we were unable to recover it. 00:34:39.722 [2024-07-14 02:21:45.344257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.722 [2024-07-14 02:21:45.344299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.722 qpair failed and we were unable to recover it. 00:34:39.722 [2024-07-14 02:21:45.344473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.722 [2024-07-14 02:21:45.344518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.722 qpair failed and we were unable to recover it. 00:34:39.722 [2024-07-14 02:21:45.344662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.722 [2024-07-14 02:21:45.344691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.722 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.344846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.344879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.345082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.345126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.345375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.345426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.345662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.345705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 
00:34:39.999 [2024-07-14 02:21:45.345856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.345890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.346112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.346138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.346349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.346395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.346608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.346651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.346839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.346896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.347103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.347130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.347300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.347343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.347545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.347590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.347775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.347802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:39.999 qpair failed and we were unable to recover it. 00:34:39.999 [2024-07-14 02:21:45.348039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.999 [2024-07-14 02:21:45.348084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 
00:34:40.000 [2024-07-14 02:21:45.348259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.348287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.348464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.348509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.348687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.348712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.348966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.349010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.349233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.349277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.349460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.349509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.349684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.349710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.349859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.349892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.350095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.350140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.350310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.350354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 
00:34:40.000 [2024-07-14 02:21:45.350586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.350629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.350802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.350842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.351011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.351037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.351219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.351245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.351405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.351447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.351667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.351694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.351895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.351941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.352096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.352120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.352351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.352379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.352753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.352802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 
00:34:40.000 [2024-07-14 02:21:45.352972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.352997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.353181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.353206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.353376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.353401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.353728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.353780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.354003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.354028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.354187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.354212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.354410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.354437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.354675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.354703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.354919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.354944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.355099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.355124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 
00:34:40.000 [2024-07-14 02:21:45.355279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.355322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.355577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.355604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.355794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.355821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.356001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.356027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.356252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.356280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.356503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.356531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.356787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.356839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.357024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.357049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.357222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.357251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.000 qpair failed and we were unable to recover it. 00:34:40.000 [2024-07-14 02:21:45.357422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.000 [2024-07-14 02:21:45.357447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 
00:34:40.001 [2024-07-14 02:21:45.357726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.357797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.357983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.358008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.358185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.358209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.358355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.358380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.358552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.358577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.358766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.358793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.358968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.358995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.359246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.359272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.359442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.359467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.359664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.359691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 
00:34:40.001 [2024-07-14 02:21:45.359884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.359909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.360106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.360131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.360359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.360410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.360602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.360629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.360822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.360851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.361058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.361084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.361280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.361310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.361483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.361511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.361680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.361707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.361883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.361910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 
00:34:40.001 [2024-07-14 02:21:45.362084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.362109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.362337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.362364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.362557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.362598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.362789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.362816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.362996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.363021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.363222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.363252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.363463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.363491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.363776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.363804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.364014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.364040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.364219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.364244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 
00:34:40.001 [2024-07-14 02:21:45.364395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.364421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.364627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.364667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.364872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.364915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.365073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.365098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.365330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.365372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.365653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.365700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.365873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.365917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.366093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.366118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.366350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.366378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 00:34:40.001 [2024-07-14 02:21:45.366592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.366646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.001 qpair failed and we were unable to recover it. 
00:34:40.001 [2024-07-14 02:21:45.366845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.001 [2024-07-14 02:21:45.366875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.367029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.367054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.367230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.367255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.367447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.367474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.367698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.367747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.367952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.367978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.368157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.368181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.368383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.368408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.368555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.368580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.368781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.368806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 
00:34:40.002 [2024-07-14 02:21:45.369033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.369058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.369212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.369236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.369389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.369418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.369596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.369620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.369820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.369844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.370003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.370029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.370224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.370251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.370411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.370440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.370633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.370661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.370882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.370907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 
00:34:40.002 [2024-07-14 02:21:45.371081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.371106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.371257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.371281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.371429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.371453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.371601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.371626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.371802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.371826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.372025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.372051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.372231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.372260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.372475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.372500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.372667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.372697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.372864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.372918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 
00:34:40.002 [2024-07-14 02:21:45.373072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.373097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.373274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.373299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.373550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.373602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.373835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.373860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.374074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.374099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.374279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.374304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.374472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.374496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.374720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.374747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.374932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.374958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.375111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.375137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 
00:34:40.002 [2024-07-14 02:21:45.375344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.375369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.002 qpair failed and we were unable to recover it. 00:34:40.002 [2024-07-14 02:21:45.375603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.002 [2024-07-14 02:21:45.375631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.375824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.375851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.376030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.376055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.376254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.376279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.376479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.376503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.376657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.376698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.376891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.376916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.377119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.377144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.377315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.377339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 
00:34:40.003 [2024-07-14 02:21:45.377516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.377541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.377744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.377768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.377927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.377954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.378110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.378135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.378307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.378332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.378530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.378554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.378753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.378778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.378930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.378955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.379127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.379151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.379292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.379317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 
00:34:40.003 [2024-07-14 02:21:45.379493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.379518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.379671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.379695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.379843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.379885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.380061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.380086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.380338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.380363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.380535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.380559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.380767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.380794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.381018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.381043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.381244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.381287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.381554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.381582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 
00:34:40.003 [2024-07-14 02:21:45.381798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.381825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.382106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.382131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.382305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.382329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.382497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.382521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.382696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.382721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.382878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.382902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.003 qpair failed and we were unable to recover it. 00:34:40.003 [2024-07-14 02:21:45.383058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.003 [2024-07-14 02:21:45.383084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.383260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.383285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.383485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.383509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.383652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.383677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 
00:34:40.004 [2024-07-14 02:21:45.383844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.383881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.384057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.384082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.384242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.384267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.384450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.384475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.384729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.384780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.385010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.385035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.385210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.385235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.385408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.385433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.385691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.385715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.385936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.385961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 
00:34:40.004 [2024-07-14 02:21:45.386117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.386141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.386309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.386334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.386477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.386501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.386703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.386728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.386893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.386919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.387077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.387102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.387292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.387320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.387538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.387565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.387787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.387814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.387991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.388017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 
00:34:40.004 [2024-07-14 02:21:45.388190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.388214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.388415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.388440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.388712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.388737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.388911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.388937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.389138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.389163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.389316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.389341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.389540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.389564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.389714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.389742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.389888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.389914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.390118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.390162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 
00:34:40.004 [2024-07-14 02:21:45.390356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.390383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.390549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.390573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.390750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.390774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.390922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.390947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.391206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.391233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.391430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.391455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.004 qpair failed and we were unable to recover it. 00:34:40.004 [2024-07-14 02:21:45.391632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.004 [2024-07-14 02:21:45.391657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.391833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.391857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.392016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.392042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.392251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.392276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 
00:34:40.005 [2024-07-14 02:21:45.392452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.392476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.392650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.392675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.392847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.392880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.393025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.393051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.393193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.393218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.393393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.393417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.393586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.393610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.393789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.393814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.393972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.393997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.394149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.394174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 
00:34:40.005 [2024-07-14 02:21:45.394328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.394353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.394553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.394577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.394734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.394758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.394930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.394956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.395146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.395173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.395379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.395404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.395604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.395628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.395773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.395814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.396032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.396057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.396259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.396284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 
00:34:40.005 [2024-07-14 02:21:45.396566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.396624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.396813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.396840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.397038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.397063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.397215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.397239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.397417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.397442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.397618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.397643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.397819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.397843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.398021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.398047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.398204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.398229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.398403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.398428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 
00:34:40.005 [2024-07-14 02:21:45.398572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.398597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.398737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.398761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.398958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.398983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.399134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.399159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.399329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.399353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.399564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.399589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.399765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.005 [2024-07-14 02:21:45.399789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.005 qpair failed and we were unable to recover it. 00:34:40.005 [2024-07-14 02:21:45.399975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.400000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.400179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.400206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.400476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.400500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 
00:34:40.006 [2024-07-14 02:21:45.400675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.400700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.400876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.400902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.401106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.401133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.401354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.401378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.401545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.401570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.401747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.401774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.401993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.402021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.402246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.402270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.402595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.402648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.402838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.402874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 
00:34:40.006 [2024-07-14 02:21:45.403046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.403073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.403239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.403264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.403466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.403490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.403646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.403670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.403818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.403843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.404008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.404038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.404184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.404209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.404393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.404420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.404580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.404607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.404800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.404824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 
00:34:40.006 [2024-07-14 02:21:45.405006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.405031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.405199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.405223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.405404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.405429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.405636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.405660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.405831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.405856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.406038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.406063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.406233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.406258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.406424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.406449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.406624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.406649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.406798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.406822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 
00:34:40.006 [2024-07-14 02:21:45.407029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.407055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.407233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.407258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.407410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.407435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.407609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.407634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.407808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.407833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.407994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.408019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.408198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.408223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.408422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.006 [2024-07-14 02:21:45.408447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.006 qpair failed and we were unable to recover it. 00:34:40.006 [2024-07-14 02:21:45.408624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.408649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.408818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.408843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 
00:34:40.007 [2024-07-14 02:21:45.409001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.409026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.409174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.409199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.409377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.409407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.409558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.409583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.409757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.409782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.409953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.409979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.410248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.410276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.410470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.410495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.410697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.410721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.410873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.410898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 
00:34:40.007 [2024-07-14 02:21:45.411078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.411103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.411251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.411276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.411454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.411483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.411688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.411713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.411892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.411917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.412102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.412127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.412272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.412297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.412441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.412466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.412644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.412669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.412843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.412874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 
00:34:40.007 [2024-07-14 02:21:45.413019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.413046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.413219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.413244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.413395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.413420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.413597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.413622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.413770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.413795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.413971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.413996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.414176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.414203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.414407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.414434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.414631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.414658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.414836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.414870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 
00:34:40.007 [2024-07-14 02:21:45.415027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.415051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.415226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.415252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.415431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.415458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.415610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.415635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.415811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.415836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.416015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.416041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.416187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.416213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.416365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.416390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.007 qpair failed and we were unable to recover it. 00:34:40.007 [2024-07-14 02:21:45.416568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.007 [2024-07-14 02:21:45.416593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.416843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.416877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 
00:34:40.008 [2024-07-14 02:21:45.417057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.417082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.417289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.417316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.417511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.417535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.417716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.417741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.417932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.417961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.418156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.418184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.418384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.418409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.418582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.418607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.418797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.418825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.419015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.419044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 
00:34:40.008 [2024-07-14 02:21:45.419240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.419267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.419434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.419459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.419608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.419634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.419838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.419872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.420097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.420125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.420349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.420373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.420567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.420615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.420788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.420815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.421011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.421036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.421250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.421275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 
00:34:40.008 [2024-07-14 02:21:45.421473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.421501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.421718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.421745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.421913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.421941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.422163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.422187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.422410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.422458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.422685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.422712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.422914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.422942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.423160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.423185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.423434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.423459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.423646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.423671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 
00:34:40.008 [2024-07-14 02:21:45.423890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.423932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.424159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.424184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.424385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.424413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.424607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.008 [2024-07-14 02:21:45.424634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.008 qpair failed and we were unable to recover it. 00:34:40.008 [2024-07-14 02:21:45.424837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.424862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.425017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.425042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.425196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.425221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.425370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.425394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.425589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.425616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.425841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.425873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 
00:34:40.009 [2024-07-14 02:21:45.426077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.426104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.426302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.426330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.426516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.426543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.426716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.426740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.426922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.426948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.427129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.427154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.427353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.427380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.427574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.427598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.427863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.427897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.428127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.428155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 
00:34:40.009 [2024-07-14 02:21:45.428328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.428355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.428548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.428572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.428773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.428800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.429030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.429055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.429253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.429281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.429483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.429507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.429759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.429808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.429976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.430009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.430193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.430217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.430417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.430441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 
00:34:40.009 [2024-07-14 02:21:45.430616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.430673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.430856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.430892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.431061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.431088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.431303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.431327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.431591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.431618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.431836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.431863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.432078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.432105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.432294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.432318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.432546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.432599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.432792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.432819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 
00:34:40.009 [2024-07-14 02:21:45.433017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.433043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.433199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.433223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.433461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.433509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.433736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.433764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.433980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.009 [2024-07-14 02:21:45.434008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.009 qpair failed and we were unable to recover it. 00:34:40.009 [2024-07-14 02:21:45.434213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.434238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.434418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.434443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.434644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.434669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.434896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.434924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.435145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.435169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 
00:34:40.010 [2024-07-14 02:21:45.435319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.435344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.435495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.435520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.435719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.435744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.435949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.435974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.436144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.436175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.436396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.436423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.436644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.436671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.436883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.436908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.437195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.437252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.437441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.437469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 
00:34:40.010 [2024-07-14 02:21:45.437726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.437750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.437956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.437981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.438161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.438189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.438354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.438381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.438577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.438602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.438784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.438808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.439010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.439038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.439229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.439256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.439459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.439485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.439632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.439657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 
00:34:40.010 [2024-07-14 02:21:45.439888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.439916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.440118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.440146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.440303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.440330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.440501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.440526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.440724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.440752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.440946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.440975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.441196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.441221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.441427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.441451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.441710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.441737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.441966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.441994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 
00:34:40.010 [2024-07-14 02:21:45.442181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.442209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.442408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.442432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.442649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.442700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.442890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.442918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.443125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.443149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.443347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.010 [2024-07-14 02:21:45.443371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.010 qpair failed and we were unable to recover it. 00:34:40.010 [2024-07-14 02:21:45.443586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.443643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.443848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.443879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.444037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.444062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.444244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.444269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 
00:34:40.011 [2024-07-14 02:21:45.444528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.444553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.444728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.444756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.444976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.445002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.445170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.445195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.445419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.445471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.445694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.445721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.445994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.446022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.446242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.446266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.446568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.446623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.446814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.446841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 
00:34:40.011 [2024-07-14 02:21:45.447038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.447062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.447246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.447271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.447468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.447495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.447654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.447681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.447880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.447907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.448104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.448129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.448381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.448433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.448651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.448678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.448887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.448915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.449125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.449150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 
00:34:40.011 [2024-07-14 02:21:45.449369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.449419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.449589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.449616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.449802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.449829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.450011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.450036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.450211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.450235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.450411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.450439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.450652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.450680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.450845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.450876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.451053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.451081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.451296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.451324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 
00:34:40.011 [2024-07-14 02:21:45.451550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.451574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.451729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.451754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.451947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.451979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.452174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.452202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.452422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.452449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.452675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.452700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.452882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.452912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.011 qpair failed and we were unable to recover it. 00:34:40.011 [2024-07-14 02:21:45.453183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.011 [2024-07-14 02:21:45.453211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.453400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.453429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.453624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.453649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 
00:34:40.012 [2024-07-14 02:21:45.453823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.453848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.454009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.454052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.454220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.454249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.454447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.454472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.454725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.454773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.454976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.455001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.455184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.455208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.455359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.455384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.455538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.455562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.455765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.455789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 
00:34:40.012 [2024-07-14 02:21:45.456000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.456029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.456191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.456217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.456446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.456495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.456702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.456730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.456899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.456927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.457105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.457130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.457305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.457330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.457507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.457535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.457759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.457787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.457957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.457985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 
00:34:40.012 [2024-07-14 02:21:45.458181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.458208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.458375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.458402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.458561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.458590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.458778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.458802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.458969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.458997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.459188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.459215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.459433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.459460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.459652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.459677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.459891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.459917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.460074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.460098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 
00:34:40.012 [2024-07-14 02:21:45.460245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.460287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.460507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.460531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.012 [2024-07-14 02:21:45.460725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.012 [2024-07-14 02:21:45.460773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.012 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.461008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.461033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.461212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.461236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.461408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.461433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.461581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.461606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.461787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.461814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.462051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.462077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.462257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.462282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 
00:34:40.013 [2024-07-14 02:21:45.462431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.462455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.462635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.462660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.462906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.462931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.463083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.463109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.463372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.463420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.463645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.463673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.463831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.463863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.464087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.464112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.464337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.464364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.464589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.464614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 
00:34:40.013 [2024-07-14 02:21:45.464828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.464855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.465089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.465113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.465314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.465341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.465535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.465563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.465754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.465782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.465982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.466008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.466200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.466228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.466418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.466443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.466619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.466644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.466798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.466823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 
00:34:40.013 [2024-07-14 02:21:45.467026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.467055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.467230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.467258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.467451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.467479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.467680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.467705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.467904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.467942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.468101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.468128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.468291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.468319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.468510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.468535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.468692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.468720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.468890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.468920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 
00:34:40.013 [2024-07-14 02:21:45.469139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.469168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.469330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.469354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.469547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.013 [2024-07-14 02:21:45.469575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.013 qpair failed and we were unable to recover it. 00:34:40.013 [2024-07-14 02:21:45.469741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.469768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.469970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.469998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.470173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.470198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.470353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.470378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.470571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.470599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.470784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.470811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.471020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.471045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 
00:34:40.014 [2024-07-14 02:21:45.471253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.471280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.471465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.471493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.471685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.471714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.471889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.471924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.472133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.472162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.472346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.472372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.472565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.472593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.472792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.472818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.472971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.472997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.473151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.473178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 
00:34:40.014 [2024-07-14 02:21:45.473407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.473435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.473636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.473662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.473928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.473958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.474154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.474184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.474403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.474432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.474643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.474669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.474901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.474929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.475111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.475140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.475334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.475363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.475570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.475596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 
00:34:40.014 [2024-07-14 02:21:45.475798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.475826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.476045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.476071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.476242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.476269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.476600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.476666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.476890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.476917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.477097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.477122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.477364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.477393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.477603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.477629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.477826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.477850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.478119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.478161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 
00:34:40.014 [2024-07-14 02:21:45.478345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.478374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.478602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.478631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.478838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.478876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.479042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.014 [2024-07-14 02:21:45.479067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.014 qpair failed and we were unable to recover it. 00:34:40.014 [2024-07-14 02:21:45.479281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.479312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.479541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.479570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.479736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.479764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.479965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.479992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.480216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.480245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.480475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.480501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 
00:34:40.015 [2024-07-14 02:21:45.480894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.480953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.481170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.481199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.481396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.481424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.481638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.481667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.481936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.481962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.482169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.482197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.482414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.482440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.482632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.482660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.482905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.482947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.483132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.483176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 
00:34:40.015 [2024-07-14 02:21:45.483373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.483401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.483566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.483594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.483782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.483811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.483997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.484024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.484250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.484279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.484482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.484525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.484780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.484809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.485070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.485096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.485321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.485350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.485625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.485653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 
00:34:40.015 [2024-07-14 02:21:45.485852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.485885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.486099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.486131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.486319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.486348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.486539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.486567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.486756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.486785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.486959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.486985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.487256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.487284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.487523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.487551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.487749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.487777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.488005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.488032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 
00:34:40.015 [2024-07-14 02:21:45.488258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.488287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.488482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.488524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.488812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.488841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.489111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.489153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.015 [2024-07-14 02:21:45.489374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.015 [2024-07-14 02:21:45.489403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.015 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.489605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.489634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.489821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.489849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.490046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.490072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.490231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.490256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.490459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.490484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 
00:34:40.016 [2024-07-14 02:21:45.490788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.490847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.491092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.491117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.491301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.491327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.491521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.491550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.491743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.491772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.492006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.492032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.492235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.492263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.492460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.492488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.492779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.492842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.493058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.493085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 
00:34:40.016 [2024-07-14 02:21:45.493266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.493291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.493522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.493550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.493774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.493803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.494008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.494034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.494303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.494331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.494557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.494585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.494848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.494883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.495087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.495112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.495338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.495367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.495586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.495614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 
00:34:40.016 [2024-07-14 02:21:45.495824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.495852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.496060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.496086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.496294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.496322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.496538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.496567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.496777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.496805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.497012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.497038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.497246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.497274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.497493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.497518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.497768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.497796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 00:34:40.016 [2024-07-14 02:21:45.498018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.498045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.016 qpair failed and we were unable to recover it. 
00:34:40.016 [2024-07-14 02:21:45.498242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.016 [2024-07-14 02:21:45.498271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.498490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.498518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.498759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.498788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.498984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.499010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.499185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.499213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.499407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.499435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.499635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.499722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.500015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.500041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.500224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.500249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.500451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.500479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 
00:34:40.017 [2024-07-14 02:21:45.500698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.500726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.500929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.500956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.501137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.501162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.501338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.501364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.501657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.501708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.501912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.501953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.502175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.502203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.502398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.502426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.502713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.502768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.502974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.503004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 
00:34:40.017 [2024-07-14 02:21:45.503206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.503235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.503456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.503485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.503751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.503801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.504004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.504030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.504206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.504232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.504432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.504460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.504664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.504692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.504931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.504957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.505140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.505187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.505404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.505430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 
00:34:40.017 [2024-07-14 02:21:45.505657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.505685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.505919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.505945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.506126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.506151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.506303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.506346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.506537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.506565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.506760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.506789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.506959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.506986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.507161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.507186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.507362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.507388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 00:34:40.017 [2024-07-14 02:21:45.507613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.507641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.017 qpair failed and we were unable to recover it. 
00:34:40.017 [2024-07-14 02:21:45.507863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.017 [2024-07-14 02:21:45.507898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.508100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.508125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.508338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.508364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.508561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.508589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.508755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.508784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.508992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.509018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.509191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.509220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.509572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.509631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.509876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.509904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.510095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.510121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 
00:34:40.018 [2024-07-14 02:21:45.510297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.510324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.510547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.510576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.510744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.510774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.511010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.511036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.511178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.511203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.511353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.511397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.511610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.511638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.511807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.511835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.512060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.512086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.512333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.512374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 
00:34:40.018 [2024-07-14 02:21:45.512597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.512626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.512846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.512882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.513085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.513110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.513259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.513300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.513531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.513559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.513755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.513785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.513977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.514004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.514276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.514327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.514573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.514601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.514806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.514834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 
00:34:40.018 [2024-07-14 02:21:45.515010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.515038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.515234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.515264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.515461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.515489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.515680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.515714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.515924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.515952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.516156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.516185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.516372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.516401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.516587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.516615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.516816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.516845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.517046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.517072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 
00:34:40.018 [2024-07-14 02:21:45.517339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.018 [2024-07-14 02:21:45.517368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.018 qpair failed and we were unable to recover it. 00:34:40.018 [2024-07-14 02:21:45.517565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.517593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.517861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.517899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.518094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.518119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.518335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.518363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.518556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.518584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.518763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.518789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.519004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.519030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.519248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.519277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.519448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.519476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 
00:34:40.019 [2024-07-14 02:21:45.519667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.519698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.519939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.519965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.520190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.520219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.520413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.520442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.520666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.520694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.520911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.520937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.521082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.521107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.521263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.521289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.521455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.521481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.521836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.521915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 
00:34:40.019 [2024-07-14 02:21:45.522115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.522140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.522387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.522413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.522745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.522802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.523004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.523030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.523226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.523255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.523428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.523456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.523762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.523811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.524013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.524039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.524198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.524224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.524396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.524421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 
00:34:40.019 [2024-07-14 02:21:45.524632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.524673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.524890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.524934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.525106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.525132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.525343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.525369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.525638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.525685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.525883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.525925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.526120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.526162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.526334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.526362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.526558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.526587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.526783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.526811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 
00:34:40.019 [2024-07-14 02:21:45.526999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.527025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.527221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.527251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.019 qpair failed and we were unable to recover it. 00:34:40.019 [2024-07-14 02:21:45.527641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.019 [2024-07-14 02:21:45.527703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.527878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.527907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.528084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.528109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.528333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.528361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.528636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.528689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.528895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.528939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.529101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.529127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.529327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.529353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 
00:34:40.020 [2024-07-14 02:21:45.529560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.529588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.529804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.529833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.530040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.530066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.530313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.530354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.530635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.530664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.530887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.530930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.531081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.531106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.531312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.531340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.531582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.531633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.531830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.531860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 
00:34:40.020 [2024-07-14 02:21:45.532064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.532090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.532294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.532327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.532560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.532613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.532808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.532836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.533038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.533064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.533239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.533267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.533489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.533515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.533693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.533722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.533965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.533990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.534197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.534225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 
00:34:40.020 [2024-07-14 02:21:45.534452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.534477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.534825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.534854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.535058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.535084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.535296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.535324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.535716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.535775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.535989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.536015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.536183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.536211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.536404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.536432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.536628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.536656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.536843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.536877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 
00:34:40.020 [2024-07-14 02:21:45.537055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.537081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.537279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.537307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.537692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.020 [2024-07-14 02:21:45.537750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.020 qpair failed and we were unable to recover it. 00:34:40.020 [2024-07-14 02:21:45.537957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.537983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.538182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.538210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.538422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.538448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.538620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.538649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.538848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.538882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.539076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.539106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.539328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.539356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 
00:34:40.021 [2024-07-14 02:21:45.539744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.539797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.540031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.540057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.540233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.540262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.540447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.540476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.540671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.540700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.540907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.540933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.541084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.541109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.541334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.541363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.541588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.541636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.541831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.541860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 
00:34:40.021 [2024-07-14 02:21:45.542040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.542065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.542295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.542324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.542523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.542551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.542772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.542800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.543000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.543026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.543252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.543281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.543502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.543527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.543797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.543822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.543983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.544009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.544164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.544208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 
00:34:40.021 [2024-07-14 02:21:45.544553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.544610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.544820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.544849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.545063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.545089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.545316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.545345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.545671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.545722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.021 qpair failed and we were unable to recover it. 00:34:40.021 [2024-07-14 02:21:45.546019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.021 [2024-07-14 02:21:45.546045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.546246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.546274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.546468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.546496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.546695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.546723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.546901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.546944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 
00:34:40.022 [2024-07-14 02:21:45.547119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.547160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.547358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.547387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.547618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.547646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.547841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.547873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.548027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.548052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.548278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.548306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.548579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.548608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.548804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.548832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.549044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.549070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.549268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.549297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 
00:34:40.022 [2024-07-14 02:21:45.549499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.549525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.549855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.549921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.550106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.550132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.550285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.550311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.550488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.550513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.550689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.550714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.550888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.550917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.551082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.551110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.551309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.551335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.551563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.551591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 
00:34:40.022 [2024-07-14 02:21:45.551763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.551793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.551961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.551989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.552192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.552218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.552488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.552537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.552758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.552786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.552985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.553014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.553210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.553236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.553408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.553434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.553576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.553602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.553805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.553831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 
00:34:40.022 [2024-07-14 02:21:45.554073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.554098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.554425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.554476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.554695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.554723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.554892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.554921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.555142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.022 [2024-07-14 02:21:45.555168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.022 qpair failed and we were unable to recover it. 00:34:40.022 [2024-07-14 02:21:45.555520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.555575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.555769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.555802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.556000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.556029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.556253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.556279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.556589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.556646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 
00:34:40.023 [2024-07-14 02:21:45.556840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.556885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.557071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.557099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.557272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.557298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.557568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.557619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.557814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.557842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.558018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.558044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.558225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.558250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.558474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.558534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.558752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.558781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.558950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.558979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 
00:34:40.023 [2024-07-14 02:21:45.559158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.559183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.559386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.559415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.559639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.559667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.559862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.559896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.560114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.560139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.560424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.560480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.560696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.560724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.560948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.560974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.561122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.561148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.561379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.561437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 
00:34:40.023 [2024-07-14 02:21:45.561657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.561686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.561882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.561908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.562082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.562108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.562369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.562421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.562645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.562671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.562874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.562904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.563082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.563107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.563310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.563336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.563515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.563543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.563708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.563737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 
00:34:40.023 [2024-07-14 02:21:45.563937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.563964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.564116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.564141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.564331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.564359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.564520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.564548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.564745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.564771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.023 [2024-07-14 02:21:45.564952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.023 [2024-07-14 02:21:45.564979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.023 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.565205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.565234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.565437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.565466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.565662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.565688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.565860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.565895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 
00:34:40.024 [2024-07-14 02:21:45.566092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.566117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.566289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.566315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.566495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.566521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.566700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.566726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.566947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.566977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.567173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.567201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.567400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.567426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.567631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.567659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.567832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.567861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.568036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.568065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 
00:34:40.024 [2024-07-14 02:21:45.568239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.568268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.568529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.568581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.568809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.568837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.569041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.569067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.569215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.569241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.569417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.569443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.569644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.569669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.569905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.569934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.570131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.570158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.570386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.570435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 
00:34:40.024 [2024-07-14 02:21:45.570653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.570682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.570881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.570910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.571078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.571104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.571358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.571409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.571625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.571654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.571877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.571915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.572131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.572157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.572332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.572357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.572557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.572585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.572770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.572799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 
00:34:40.024 [2024-07-14 02:21:45.572976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.573002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.573201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.573230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.573389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.573418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.024 qpair failed and we were unable to recover it. 00:34:40.024 [2024-07-14 02:21:45.573640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.024 [2024-07-14 02:21:45.573669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.573841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.573875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.574079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.574107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.574302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.574331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.574530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.574558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.574761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.574786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.574963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.574992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 
00:34:40.025 [2024-07-14 02:21:45.575184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.575213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.575445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.575471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.575649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.575675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.575827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.575853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.576066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.576095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.576298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.576324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.576502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.576529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.576732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.576761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.576983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.577012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.577193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.577221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 
00:34:40.025 [2024-07-14 02:21:45.577399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.577425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.577607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.577632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.577795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.577823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.578025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.578051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.578233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.578259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.578436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.578462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.578659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.578688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.578850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.578887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.579076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.579102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.579250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.579275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 
00:34:40.025 [2024-07-14 02:21:45.579453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.579479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.579682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.579711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.579910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.579937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.580131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.580160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.580375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.580401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.580557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.580584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.580760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.580786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.580961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.580987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.025 [2024-07-14 02:21:45.581202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.025 [2024-07-14 02:21:45.581228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.025 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.581429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.581454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 
00:34:40.026 [2024-07-14 02:21:45.581645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.581672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.581875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.581904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.582104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.582133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.582320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.582349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.582543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.582568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.582800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.582829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.583050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.583079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.583276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.583305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.583514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.583545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.583751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.583776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 
00:34:40.026 [2024-07-14 02:21:45.583924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.583950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.584123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.584152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.584346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.584371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.584524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.584550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.584721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.584746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.584934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.584964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.585190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.585216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.585421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.585449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.585622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.585652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.585841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.585882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 
00:34:40.026 [2024-07-14 02:21:45.586082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.586108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.586392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.586443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.586670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.586696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.586897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.586927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.587151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.587177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.587356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.587381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.587548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.587577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.587762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.587790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.588022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.588048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.588342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.588402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 
00:34:40.026 [2024-07-14 02:21:45.588623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.588648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.588883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.588912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.589139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.589165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.589345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.589370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.589600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.589628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.589830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.589859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.590047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.026 [2024-07-14 02:21:45.590073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.026 qpair failed and we were unable to recover it. 00:34:40.026 [2024-07-14 02:21:45.590324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.590350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.590547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.590575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.590763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.590792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 
00:34:40.027 [2024-07-14 02:21:45.590966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.590993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.591269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.591317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.591512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.591540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.591727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.591755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.591957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.591983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.592175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.592200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.592401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.592430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.592613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.592642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.592862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.592902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.593091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.593119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 
00:34:40.027 [2024-07-14 02:21:45.593344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.593373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.593566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.593594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.593816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.593842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.594058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.594087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.594283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.594308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.594526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.594555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.594762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.594788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.594990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.595021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.595206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.595235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.595436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.595464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 
00:34:40.027 [2024-07-14 02:21:45.595684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.595710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.595899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.595943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.596109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.596138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.596314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.596340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.596485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.596511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.596685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.596711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.596859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.596891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.597080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.597106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.597289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.597314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.597459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.597485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 
00:34:40.027 [2024-07-14 02:21:45.597687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.597713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.597888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.597915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.598057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.598082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.027 [2024-07-14 02:21:45.598224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.027 [2024-07-14 02:21:45.598250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.027 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.598390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.598416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.598568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.598594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.598768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.598794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.598977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.599004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.599198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.599225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.599404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.599430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 
00:34:40.028 [2024-07-14 02:21:45.599575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.599601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.599802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.599828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.600015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.600041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.600220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.600246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.600383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.600409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.600586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.600614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.600790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.600817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.600996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.601022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.601192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.601218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.601392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.601418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 
00:34:40.028 [2024-07-14 02:21:45.601570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.601597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.601777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.601803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.601999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.602026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.602172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.602198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.602420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.602449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.602626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.602651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.602853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.602895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.603070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.603096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.603275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.603302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.603479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.603505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 
00:34:40.028 [2024-07-14 02:21:45.603676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.603702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.603881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.603907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.604049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.604075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.604256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.604286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.604486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.604511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.604664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.604690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.604883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.604911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.605071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.605097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.605272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.605298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.605442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.605468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 
00:34:40.028 [2024-07-14 02:21:45.605671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.605697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.605880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.605906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.028 qpair failed and we were unable to recover it. 00:34:40.028 [2024-07-14 02:21:45.606074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.028 [2024-07-14 02:21:45.606100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.606242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.606268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.606447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.606473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.606650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.606677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.606877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.606902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.607084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.607110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.607363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.607389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.607573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.607599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 
00:34:40.029 [2024-07-14 02:21:45.607777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.607803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.607984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.608010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.608262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.608287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.608496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.608526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.608744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.608769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.608943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.608969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.609117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.609143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.609323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.609349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.609524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.609551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.609730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.609756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 
00:34:40.029 [2024-07-14 02:21:45.609962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.609992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.610169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.610195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.610400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.610425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.610676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.610701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.610883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.610909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.611108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.611134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.611334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.611360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.611539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.611564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.611704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.611730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.611905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.611931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 
00:34:40.029 [2024-07-14 02:21:45.612109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.612135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.612354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.612403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.612633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.612662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.612861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.612908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.613130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.613155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.613335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.029 [2024-07-14 02:21:45.613361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.029 qpair failed and we were unable to recover it. 00:34:40.029 [2024-07-14 02:21:45.613547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.613572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.613756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.613781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.613960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.613986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.614165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.614190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 
00:34:40.030 [2024-07-14 02:21:45.614363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.614389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.614590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.614616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.614819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.614845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.615053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.615078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.615245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.615271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.615439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.615465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.615619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.615645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.615822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.615851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.616037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.616063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.616245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.616270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 
00:34:40.030 [2024-07-14 02:21:45.616447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.616473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.616657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.616682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.616839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.616871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.617070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.617096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.617235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.617260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.617438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.617464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.617672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.617698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.617903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.617929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.618078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.618104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.618279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.618305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 
00:34:40.030 [2024-07-14 02:21:45.618474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.618500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.618682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.618708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.618884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.618910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.619085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.619111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.619281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.619307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.619480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.619506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.619683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.619709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.619913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.619939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.620090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.620116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 00:34:40.030 [2024-07-14 02:21:45.620295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.620321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.030 qpair failed and we were unable to recover it. 
00:34:40.030 [2024-07-14 02:21:45.620508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.030 [2024-07-14 02:21:45.620534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.620719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.620744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.620887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.620913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.621087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.621112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.621321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.621347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.621523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.621549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.621721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.621747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.621895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.621922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.622123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.622149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.622300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.622326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 
00:34:40.031 [2024-07-14 02:21:45.622498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.622524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.622674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.622699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.622854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.622886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.623062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.623088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.623288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.623313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.623483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.623509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.623709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.623735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.623909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.623935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.624089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.624115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.624319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.624344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 
00:34:40.031 [2024-07-14 02:21:45.624497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.624523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.624685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.624713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.624886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.624916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.625111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.625137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.625316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.625342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.625546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.625571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.625723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.625749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.625895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.625922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.626065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.626091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.626263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.626288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 
00:34:40.031 [2024-07-14 02:21:45.626489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.626515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.626666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.626691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.626871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.626897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.627072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.627098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.627245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.627270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.627459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.627485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.627657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.627683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.627858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.031 [2024-07-14 02:21:45.627892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.031 qpair failed and we were unable to recover it. 00:34:40.031 [2024-07-14 02:21:45.628093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.628119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.628264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.628290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 
00:34:40.032 [2024-07-14 02:21:45.628436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.628462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.628636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.628662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.628844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.628884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.629093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.629119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.629320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.629346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.629545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.629576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.629727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.629752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.629905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.629932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.630130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.630156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.630324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.630350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 
00:34:40.032 [2024-07-14 02:21:45.630519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.630548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.630742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.630770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.630934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.630960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.631134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.631176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.631343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.631372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.631568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.631597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.631791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.631816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.632071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.632097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.632326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.632355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.632577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.632606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 
00:34:40.032 [2024-07-14 02:21:45.632770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.632795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.632968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.632994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.633191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.633219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.633389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.633417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.633586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.633612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.633780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.633806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.633962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.633989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.634137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.634163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.634374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.634399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.634559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.634584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 
00:34:40.032 [2024-07-14 02:21:45.634767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.634792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.634938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.634964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.635119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.635149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.635324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.635349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.635530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.635556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.635730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.635756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.635931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.635958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.636114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.636139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.636288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.032 [2024-07-14 02:21:45.636314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.032 qpair failed and we were unable to recover it. 00:34:40.032 [2024-07-14 02:21:45.636488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.636514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 
00:34:40.033 [2024-07-14 02:21:45.636654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.636681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.636861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.636895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.637074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.637101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.637257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.637283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.637536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.637562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.637737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.637763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.637922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.637949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.638136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.638162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.638365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.638391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.638532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.638558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 
00:34:40.033 [2024-07-14 02:21:45.638757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.638783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.638964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.638990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.639164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.639190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.639535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.639604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.639826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.639854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.640057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.640086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.640308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.640333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.640510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.640535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.640713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.640739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.640917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.640944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 
00:34:40.033 [2024-07-14 02:21:45.641125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.641152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.641330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.641356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.641498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.641524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.641701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.641728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.641930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.641956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.642121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.642147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.642338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.642364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.642543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.642569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.642743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.642769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.642924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.642950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 
00:34:40.033 [2024-07-14 02:21:45.643106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.643133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.643316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.643342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.643545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.643571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.643752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.643778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.643980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.644006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.644154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.644179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.644364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.644391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.644567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.644593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.644745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.644770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.033 [2024-07-14 02:21:45.644939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.644965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 
00:34:40.033 [2024-07-14 02:21:45.645137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.033 [2024-07-14 02:21:45.645163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.033 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.645360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.645385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.645580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.645608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.645776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.645806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.645973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.645999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.646178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.646204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.646356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.646382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.646544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.646570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.646744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.646770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.646943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.646969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 
00:34:40.034 [2024-07-14 02:21:45.647123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.647150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.647402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.647427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.647624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.647649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.647853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.647886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.648059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.648084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.648257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.648282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.648484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.648509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.648684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.648710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.648884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.648910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.649088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.649114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 
00:34:40.034 [2024-07-14 02:21:45.649252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.649281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.649430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.649456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.649624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.649650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.649798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.649824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.650009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.650036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.650217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.650243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.650397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.650422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.650596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.650625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.650794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.650819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.650975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.651001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 
00:34:40.034 [2024-07-14 02:21:45.651172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.651198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.651380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.651406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.651605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.651631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.034 [2024-07-14 02:21:45.651780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.034 [2024-07-14 02:21:45.651806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.034 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.651985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.652012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.652189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.652215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.652360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.652385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.652589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.652614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.652764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.652790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.652966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.652993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 
00:34:40.035 [2024-07-14 02:21:45.653141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.653167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.653342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.653368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.653512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.653538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.653718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.653744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.653902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.653928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.654097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.654123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.654337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.654364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.654568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.654597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.654802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.654828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.654984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.655011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 
00:34:40.035 [2024-07-14 02:21:45.655221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.655247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.655418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.655444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.655610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.655636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.655789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.655815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.655967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.655994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.656151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.656177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.656353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.656379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.656633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.656659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.656840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.656873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.657019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.657044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 
00:34:40.035 [2024-07-14 02:21:45.657245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.657270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.657456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.657482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.657629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.657655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.657817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.657843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.658008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.658050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.658263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.658292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.658499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.658549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.658778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.658823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 02:21:45.659038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.035 [2024-07-14 02:21:45.659069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.659273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.659320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 
00:34:40.036 [2024-07-14 02:21:45.659558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.659603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.659825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.659856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.660075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.660109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.660466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.660523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.660763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.660812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.660999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.661027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.661278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.661323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.661661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.661727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.661958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.661986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.662227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.662271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 
00:34:40.036 [2024-07-14 02:21:45.662653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.662718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.663002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.663037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.663283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.663328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.663544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.663588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.663778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.663806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.664015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.664060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.664241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.664286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.664534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.664582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.664812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.664839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.665026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.665072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 
00:34:40.036 [2024-07-14 02:21:45.665280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.665325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.665555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.665598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.665775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.665802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.666023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.666053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.666252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.666296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.666495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.666538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.666715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.666741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.666939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.666969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.667154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.667197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 00:34:40.036 [2024-07-14 02:21:45.667378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.036 [2024-07-14 02:21:45.667420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.036 qpair failed and we were unable to recover it. 
00:34:40.037 [2024-07-14 02:21:45.667623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.667666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.667856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.667889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.668117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.668162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.668390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.668433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.668630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.668673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.668843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.668874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.669055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.669099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.669298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.669342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.669554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.669597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.669753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.669779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 
00:34:40.037 [2024-07-14 02:21:45.670002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.670046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.670252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.670294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.670488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.670517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.670733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.670759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.037 [2024-07-14 02:21:45.670959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.037 [2024-07-14 02:21:45.671007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.037 qpair failed and we were unable to recover it. 00:34:40.314 [2024-07-14 02:21:45.671212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.314 [2024-07-14 02:21:45.671256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.314 qpair failed and we were unable to recover it. 00:34:40.314 [2024-07-14 02:21:45.671482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.314 [2024-07-14 02:21:45.671527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.314 qpair failed and we were unable to recover it. 00:34:40.314 [2024-07-14 02:21:45.671704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.314 [2024-07-14 02:21:45.671730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.314 qpair failed and we were unable to recover it. 00:34:40.314 [2024-07-14 02:21:45.671931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.314 [2024-07-14 02:21:45.671975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.314 qpair failed and we were unable to recover it. 00:34:40.314 [2024-07-14 02:21:45.672211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.314 [2024-07-14 02:21:45.672255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.314 qpair failed and we were unable to recover it. 
00:34:40.314 [2024-07-14 02:21:45.672461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.314 [2024-07-14 02:21:45.672505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.314 qpair failed and we were unable to recover it. 00:34:40.314 [2024-07-14 02:21:45.672682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.314 [2024-07-14 02:21:45.672708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.314 qpair failed and we were unable to recover it. 00:34:40.314 [2024-07-14 02:21:45.672892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.314 [2024-07-14 02:21:45.672918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.314 qpair failed and we were unable to recover it. 00:34:40.314 [2024-07-14 02:21:45.673126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.673168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.673394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.673438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.673669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.673713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.673936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.673981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.674217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.674261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.674470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.674513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.674720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.674747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 
00:34:40.315 [2024-07-14 02:21:45.674915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.674944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.675166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.675210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.675384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.675427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.675605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.675631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.675790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.675816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.676031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.676061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.676277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.676320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.676526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.676569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.676721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.676747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.676970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.677015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 
00:34:40.315 [2024-07-14 02:21:45.677236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.677279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.677481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.677526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.677698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.677724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.677945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.677975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.678231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.678275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.678579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.678635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.678837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.678863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.679069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.679111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.679344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.679387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.679568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.679611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 
00:34:40.315 [2024-07-14 02:21:45.679816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.679843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.680054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.680080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.680282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.680325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.680524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.680553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.680773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.680804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.681045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.681090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.681309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.681352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.681713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.681778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.682001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.682046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.682280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.682323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 
00:34:40.315 [2024-07-14 02:21:45.682496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.682539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.682745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.682772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.682975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.683019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.315 qpair failed and we were unable to recover it. 00:34:40.315 [2024-07-14 02:21:45.683221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.315 [2024-07-14 02:21:45.683265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.683467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.683511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.683693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.683721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.683919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.683963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.684163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.684206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.684431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.684475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.684678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.684704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 
00:34:40.316 [2024-07-14 02:21:45.684989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.685019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.685243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.685287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.685516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.685560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.685746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.685773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.686007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.686052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.686283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.686326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.686504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.686549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.686726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.686752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.687025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.687070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.687280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.687322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 
00:34:40.316 [2024-07-14 02:21:45.687509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.687552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.687767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.687809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.688060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.688102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.688248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.688275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.688446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.688493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.688701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.688742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.688960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.689002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.689240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.689283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.689524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.689567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.689749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.689776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 
00:34:40.316 [2024-07-14 02:21:45.690076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.690120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.690321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.690365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.690594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.690638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.690844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.690899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.691133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.691180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.691334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.691361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.691619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.691664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.691862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.691895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.692083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.692110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.692320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.692347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 
00:34:40.316 [2024-07-14 02:21:45.692564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.692607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.692774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.692800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.692998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.693042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.316 qpair failed and we were unable to recover it. 00:34:40.316 [2024-07-14 02:21:45.693248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.316 [2024-07-14 02:21:45.693293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.693538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.693582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.693775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.693801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.693981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.694008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.694204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.694247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.694708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.694766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.694954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.694995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 
00:34:40.317 [2024-07-14 02:21:45.695186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.695230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.695474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.695518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.695726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.695751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.695962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.695991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.696208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.696251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.696515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.696559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.696783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.696809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.697009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.697053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.697231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.697275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.697482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.697524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 
00:34:40.317 [2024-07-14 02:21:45.697723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.697749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.697963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.698008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.698246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.698290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.698470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.698513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.698724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.698764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.698946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.698989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.699234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.699278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.699559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.699608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.699783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.699808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.700034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.700078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 
00:34:40.317 [2024-07-14 02:21:45.700280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.700323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.700566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.700608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.700779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.700805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.701012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.701055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.701293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.701340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.701629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.701673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.701934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.701959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.702181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.702226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.702427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.702470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.702649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.702691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 
00:34:40.317 [2024-07-14 02:21:45.702901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.702941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.703166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.703208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.703411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.703455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.317 [2024-07-14 02:21:45.703724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.317 [2024-07-14 02:21:45.703767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.317 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.703964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.703989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.704201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.704230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.704454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.704498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.704720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.704763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.704978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.705022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.705231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.705273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 
00:34:40.318 [2024-07-14 02:21:45.705502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.705545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.705754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.705781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.705961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.705988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.706140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.706167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.706396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.706439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.706710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.706754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.706971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.707015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.707305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.707356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.707608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.707651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.707853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.707900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 
00:34:40.318 [2024-07-14 02:21:45.708135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.708161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.708484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.708537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.708742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.708785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.709007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.709033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.709207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.709251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.709462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.709504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.709674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.709699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.709887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.709914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.710159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.710203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.710404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.710448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 
00:34:40.318 [2024-07-14 02:21:45.710633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.710660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.710877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.710904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.711111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.711140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.711442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.711485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.711756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.711803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.712003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.712030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.712234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.712278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.712655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.712706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.712897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.712924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.713099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.713143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 
00:34:40.318 [2024-07-14 02:21:45.713411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.713454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.713727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.713789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.714012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.714039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.318 [2024-07-14 02:21:45.714243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.318 [2024-07-14 02:21:45.714287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.318 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.714514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.714558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.714814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.714840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.715024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.715050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.715228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.715272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.715445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.715488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.715725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.715768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 
00:34:40.319 [2024-07-14 02:21:45.715976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.716020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.716223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.716268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.716467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.716511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.716700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.716726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.716976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.717020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.717227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.717271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.717496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.717539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.717751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.717777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.717945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.717989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.718234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.718278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 
00:34:40.319 [2024-07-14 02:21:45.718527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.718570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.718765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.718807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.719051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.719095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.719337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.719382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.719622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.719665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.719860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.719891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.720093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.720121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.720423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.720465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.720691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.720719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.720935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.720962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 
00:34:40.319 [2024-07-14 02:21:45.721156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.721206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.721435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.721478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.721728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.721754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.721923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.721952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.722178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.722225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.722426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.722470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.722696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.722722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.723001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.319 [2024-07-14 02:21:45.723044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.319 qpair failed and we were unable to recover it. 00:34:40.319 [2024-07-14 02:21:45.723279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.723322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.723573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.723619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 
00:34:40.320 [2024-07-14 02:21:45.723877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.723904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.724149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.724194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.724512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.724557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.724773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.724818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.725000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.725028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.725265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.725309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.725551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.725597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.725820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.725848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.726073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.726102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.726524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.726596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 
00:34:40.320 [2024-07-14 02:21:45.726776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.726803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.727021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.727052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.727256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.727301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.727604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.727661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.727885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.727914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.728083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.728138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.728402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.728446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.728769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.728834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.729051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.729077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.729259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.729306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 
00:34:40.320 [2024-07-14 02:21:45.729537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.729581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.729768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.729803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.730062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.730118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.730377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.730431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.730690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.730734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.730923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.730967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.731215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.731261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.731470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.731516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.731811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.731838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.732042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.732071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 
00:34:40.320 [2024-07-14 02:21:45.732318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.732364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.732559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.732589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.732774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.732806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.733062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.733106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.733306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.733358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.733597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.733642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.733905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.733932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.734145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.320 [2024-07-14 02:21:45.734191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.320 qpair failed and we were unable to recover it. 00:34:40.320 [2024-07-14 02:21:45.734375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.734419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.734622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.734667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 
00:34:40.321 [2024-07-14 02:21:45.734961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.734988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.735214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.735259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.735468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.735513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.735763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.735809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.736013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.736040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.736286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.736332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.736585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.736630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.736914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.736950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.737205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.737249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.737561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.737614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 
00:34:40.321 [2024-07-14 02:21:45.737793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.737820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.738040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.738068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.738274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.738303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.738686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.738744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.738920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.738952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.739201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.739246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.739528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.739573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.739831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.739872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.740063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.740091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.740320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.740364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 
00:34:40.321 [2024-07-14 02:21:45.740571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.740617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.740795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.740828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.741037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.741065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.741238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.741282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.741552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.741606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.741817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.741845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.742038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.742087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.742358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.742402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.742659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.742702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.742885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.742912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 
00:34:40.321 [2024-07-14 02:21:45.743090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.743116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.743319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.743363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.743604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.743648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.743831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.743858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.744090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.744133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.744345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.744388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.744593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.744636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.744793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.744819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.321 [2024-07-14 02:21:45.745029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.321 [2024-07-14 02:21:45.745074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.321 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.745320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.745363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 
00:34:40.322 [2024-07-14 02:21:45.745550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.745592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.745774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.745816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.746044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.746089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.746270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.746313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.746559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.746602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.746788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.746815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.747081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.747125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.747338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.747380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.747556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.747601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.747808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.747834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 
00:34:40.322 [2024-07-14 02:21:45.748044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.748074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.748338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.748382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.748622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.748666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.748883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.748910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.749117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.749160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.749360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.749405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.749604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.749648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.749799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.749826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.750106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.750149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.750380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.750423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 
00:34:40.322 [2024-07-14 02:21:45.750590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.750616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.750824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.750855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.751055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.751100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.751307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.751334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.751539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.751581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.751762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.751789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.752015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.752059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.752260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.752289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.752530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.752574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.752759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.752785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 
00:34:40.322 [2024-07-14 02:21:45.752987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.753032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.753302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.753344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.753547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.753592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.753792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.753818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.754066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.754109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.754353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.754379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.754606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.754649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.754861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.754893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.755075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.755101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 00:34:40.322 [2024-07-14 02:21:45.755384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.322 [2024-07-14 02:21:45.755432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.322 qpair failed and we were unable to recover it. 
00:34:40.323 [2024-07-14 02:21:45.755707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.755751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.755935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.755962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.756136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.756179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.756384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.756414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.756624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.756653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.756859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.756890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.757051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.757077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.757276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.757320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.757590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.757634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.757822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.757848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 
00:34:40.323 [2024-07-14 02:21:45.758008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.758034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.758240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.758284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.758524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.758567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.758758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.758784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.758981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.759006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.759223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.759266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.759461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.759504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.759882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.759927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.760132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.760158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.760332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.760376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 
00:34:40.323 [2024-07-14 02:21:45.760606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.760649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.760857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.760893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.761048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.761074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.761279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.761322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.761550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.761592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.761796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.761822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.761986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.762013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.762217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.762260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.762531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.762573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.762749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.762774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 
00:34:40.323 [2024-07-14 02:21:45.762972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.762998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.763269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.763313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.763527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.763569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.763797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.763823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.764007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.323 [2024-07-14 02:21:45.764034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.323 qpair failed and we were unable to recover it. 00:34:40.323 [2024-07-14 02:21:45.764311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.764354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.764575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.764618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.764803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.764828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.765037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.765063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.765235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.765278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 
00:34:40.324 [2024-07-14 02:21:45.765478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.765522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.765805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.765858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.766037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.766063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.766238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.766281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.766476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.766505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.766716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.766763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.766956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.766999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.767204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.767247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.767480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.767522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.767703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.767729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 
00:34:40.324 [2024-07-14 02:21:45.767924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.767969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.768198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.768242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.768429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.768456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.768658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.768685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.768831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.768858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.769066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.769109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.769322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.769350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.769578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.769622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.769776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.769803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.770010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.770054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 
00:34:40.324 [2024-07-14 02:21:45.770258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.770300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.770652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.770716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.770919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.770963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.771178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.771221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.771447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.771490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.771668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.771695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.771878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.771904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.772108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.772135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.772332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.772378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.772582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.772625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 
00:34:40.324 [2024-07-14 02:21:45.772800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.772828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.773062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.773106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.773300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.773343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.773514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.773558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.773713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.773739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.773944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.324 [2024-07-14 02:21:45.773974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.324 qpair failed and we were unable to recover it. 00:34:40.324 [2024-07-14 02:21:45.774218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.774260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.774647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.774702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.774887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.774914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.775090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.775116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 
00:34:40.325 [2024-07-14 02:21:45.775308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.775352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.775542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.775585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.775760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.775787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.775936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.775963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.776175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.776217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.776551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.776602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.776801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.776827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.777057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.777101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.777304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.777348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.777553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.777597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 
00:34:40.325 [2024-07-14 02:21:45.777797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.777821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.778046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.778090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.778293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.778336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.778537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.778567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.778789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.778830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.779149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.779178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.779406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.779449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.779654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.779698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.779907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.779941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.780149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.780192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 
00:34:40.325 [2024-07-14 02:21:45.780424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.780467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.780800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.780879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.781083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.781125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.781372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.781417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.781730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.781760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.781967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.781993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.782222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.782265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.782507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.782550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.782767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.782807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.783016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.783042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 
00:34:40.325 [2024-07-14 02:21:45.783222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.783266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.783462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.783491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.783753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.783796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.784008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.784035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.784249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.784278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.784530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.784574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.325 [2024-07-14 02:21:45.784833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.325 [2024-07-14 02:21:45.784859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.325 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.785063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.785089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.785319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.785362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.785595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.785639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 
00:34:40.326 [2024-07-14 02:21:45.785892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.785918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.786098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.786125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.786415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.786462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.786728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.786771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.786963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.787005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.787251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.787295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.787485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.787528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.787715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.787741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.787994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.788039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.788263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.788307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 
00:34:40.326 [2024-07-14 02:21:45.788514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.788557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.788748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.788775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.789043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.789087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.789373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.789417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.789692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.789734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.789937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.789982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.790218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.790261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.790563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.790617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.790824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.790849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.791078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.791105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 
00:34:40.326 [2024-07-14 02:21:45.791331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.791375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.791557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.791604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.791924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.791950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.792196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.792240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.792512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.792555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.792768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.792794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.792971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.792997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.793167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.793211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.793404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.793448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.793892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.793937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 
00:34:40.326 [2024-07-14 02:21:45.794118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.794144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.794415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.794458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.794659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.794703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.794888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.794915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.795106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.795131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.795316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.795360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.795640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.326 [2024-07-14 02:21:45.795686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.326 qpair failed and we were unable to recover it. 00:34:40.326 [2024-07-14 02:21:45.795872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.795897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.796187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.796212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.796440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.796483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 
00:34:40.327 [2024-07-14 02:21:45.796708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.796752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.796939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.796966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.797193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.797236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.797441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.797485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.797710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.797754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.797940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.797966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.798138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.798182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.798376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.798420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.798657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.798701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.798909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.798935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 
00:34:40.327 [2024-07-14 02:21:45.799125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.799171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.799397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.799439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.799668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.799711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.799915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.799940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.800147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.800173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.800407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.800450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.800652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.800695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.800878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.800906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.801126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.801152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.801360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.801404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 
00:34:40.327 [2024-07-14 02:21:45.801605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.801648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.801833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.801885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.802074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.802099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.802313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.802342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.802580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.802623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.802825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.802850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.803052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.803079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.803282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.803325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.803554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.803597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.803794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.803820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 
00:34:40.327 [2024-07-14 02:21:45.804023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.804050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.804313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.327 [2024-07-14 02:21:45.804356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.327 qpair failed and we were unable to recover it. 00:34:40.327 [2024-07-14 02:21:45.804558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.804602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.804788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.804813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.805030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.805074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.805252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.805294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.805537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.805580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.805730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.805757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.805965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.806009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.806194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.806223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 
00:34:40.328 [2024-07-14 02:21:45.806430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.806457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.806657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.806683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.806967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.807012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.807236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.807278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.807490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.807517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.807718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.807745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.807956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.807985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.808236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.808281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.808513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.808557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.808742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.808768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 
00:34:40.328 [2024-07-14 02:21:45.808993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.809036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.809265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.809310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.809540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.809583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.809797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.809823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.809978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.810004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.810236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.810278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.810483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.810526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.810697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.810723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.810957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.811001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.811194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.811223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 
00:34:40.328 [2024-07-14 02:21:45.811471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.811514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.811691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.811724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.811944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.811971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.812168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.812213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.812418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.812463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.812643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.812670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.812850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.812881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.813084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.813128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.813328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.813372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.813724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.813777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 
00:34:40.328 [2024-07-14 02:21:45.813959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.813986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.814196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.814223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.815123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.328 [2024-07-14 02:21:45.815153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.328 qpair failed and we were unable to recover it. 00:34:40.328 [2024-07-14 02:21:45.815500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.815558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.815716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.815743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.815960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.816005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.816209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.816236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.816411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.816455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.816639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.816665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.816847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.816880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 
00:34:40.329 [2024-07-14 02:21:45.817081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.817125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.817343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.817385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.817662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.817705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.817913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.817944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.818214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.818258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.818465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.818509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.818719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.818746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.818940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.818985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.819234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.819277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.819640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.819683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 
00:34:40.329 [2024-07-14 02:21:45.819847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.819889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.820097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.820140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.820457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.820501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.820678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.820727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.820899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.820942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.821173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.821216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.821527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.821569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.821731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.821758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.822045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.822090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.822290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.822334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 
00:34:40.329 [2024-07-14 02:21:45.822548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.822591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.822766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.822797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.822999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.823045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.823285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.823328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.823531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.823575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.823731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.823757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.824014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.824061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.824335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.824378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.824609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.824653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.824834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.824862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 
00:34:40.329 [2024-07-14 02:21:45.825157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.825201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.825445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.825491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.825691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.825736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.329 qpair failed and we were unable to recover it. 00:34:40.329 [2024-07-14 02:21:45.826001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.329 [2024-07-14 02:21:45.826027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.826244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.826275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.826538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.826582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.826844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.826883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.827090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.827116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.827391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.827434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.827670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.827713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 
00:34:40.330 [2024-07-14 02:21:45.827896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.827923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.828157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.828183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.828389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.828433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.828698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.828742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.828929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.828955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.829199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.829243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.829580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.829623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.829829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.829856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.830048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.830092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.830330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.830375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 
00:34:40.330 [2024-07-14 02:21:45.830623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.830666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.830872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.830899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.831077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.831105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.831307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.831351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.831625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.831668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.831853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.831885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.832067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.832094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.832323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.832366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.832597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.832641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.832858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.832890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 
00:34:40.330 [2024-07-14 02:21:45.833094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.833120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.833340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.833386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.833621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.833663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.833856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.833890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.834070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.834114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.834335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.834379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.834575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.834619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.834795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.834821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.835011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.835038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.835270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.835314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 
00:34:40.330 [2024-07-14 02:21:45.835558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.835602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.835789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.835817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.836048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.836092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.836353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.836396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.330 qpair failed and we were unable to recover it. 00:34:40.330 [2024-07-14 02:21:45.836602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.330 [2024-07-14 02:21:45.836646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.836827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.836853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.837098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.837141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.837324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.837371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.837605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.837650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.837800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.837827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 
00:34:40.331 [2024-07-14 02:21:45.837990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.838017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.838216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.838260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.838536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.838579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.838775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.838802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.838980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.839008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.839185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.839230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.839438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.839482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.839682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.839725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.839933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.839978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.840207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.840250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 
00:34:40.331 [2024-07-14 02:21:45.840482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.840526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.840672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.840699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.840905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.840931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.841134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.841179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.841385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.841429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.841701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.841746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.841973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.842017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.842220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.842265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.842443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.842487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.842663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.842691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 
00:34:40.331 [2024-07-14 02:21:45.842885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.842912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.843108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.843157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.843366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.843410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.843611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.843655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.843832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.843859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.844073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.844099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.844307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.844351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.844554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.844598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.844798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.844825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 00:34:40.331 [2024-07-14 02:21:45.845041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.331 [2024-07-14 02:21:45.845070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.331 qpair failed and we were unable to recover it. 
00:34:40.332 [2024-07-14 02:21:45.845300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.845344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.845551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.845597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.845776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.845804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.846041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.846087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.846294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.846339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.846551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.846605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.846788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.846815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.847028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.847074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.847314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.847358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.847595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.847626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 
00:34:40.332 [2024-07-14 02:21:45.847826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.847856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.848065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.848093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.848267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.848296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.848497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.848526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.848754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.848782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.848971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.848997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.849159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.849184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.849367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.849393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.849578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.849605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.849810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.849837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 
00:34:40.332 [2024-07-14 02:21:45.850002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.850028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.850241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.850268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.850451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.850477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.850651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.850678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.850838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.850872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.851080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.851106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.851284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.851310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.851531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.851560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.851820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.851849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.852065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.852091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 
00:34:40.332 [2024-07-14 02:21:45.852302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.852330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.852551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.852582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.855878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.855927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.856139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.856167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.856343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.856371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.856556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.856585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.856790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.856820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.857036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.857064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.857259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.857285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 00:34:40.332 [2024-07-14 02:21:45.857491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.332 [2024-07-14 02:21:45.857520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.332 qpair failed and we were unable to recover it. 
00:34:40.332 [2024-07-14 02:21:45.857743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.857772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.858006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.858033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.858193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.858219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.858416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.858442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.858650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.858680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.858926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.858954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.859147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.859174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.859358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.859385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.859570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.859597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.859807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.859835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 
00:34:40.333 [2024-07-14 02:21:45.860078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.860109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.860408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.860438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.860724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.860752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.860962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.860990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.861169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.861197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.861372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.861413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.861598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.861624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.861877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.861905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.862111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.862151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.862316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.862344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 
00:34:40.333 [2024-07-14 02:21:45.862568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.862594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.862774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.862799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.863003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.863030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.863211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.863236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.863503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.863527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.863754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.863783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.863986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.864012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.864229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.864254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.864516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.864540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.864740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.864765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 
00:34:40.333 [2024-07-14 02:21:45.864947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.864974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.865126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.865167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.865411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.865435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.865629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.865654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.865807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.865832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.866014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.866040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.866221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.866247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.866421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.866446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.866626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.866652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.866799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.866825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 
00:34:40.333 [2024-07-14 02:21:45.867038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.867064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.333 qpair failed and we were unable to recover it. 00:34:40.333 [2024-07-14 02:21:45.867236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.333 [2024-07-14 02:21:45.867262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.867439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.867464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.867607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.867633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.867846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.867880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.868028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.868058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.868310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.868362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.868704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.868767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.868995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.869021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.869198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.869224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 
00:34:40.334 [2024-07-14 02:21:45.869404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.869430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.869733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.869793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.869992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.870018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.870224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.870255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.870423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.870452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.870794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.870842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.871074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.871100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.871280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.871305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.871644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.871696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.871925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.871954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 
00:34:40.334 [2024-07-14 02:21:45.872132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.872158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.872361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.872387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.872544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.872570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.872715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.872740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.872885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.872912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.873100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.873125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.873327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.873355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.873690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.873739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.873917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.873943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.874125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.874152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 
00:34:40.334 [2024-07-14 02:21:45.874322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.874348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.874507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.874533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.874709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.874742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.874915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.874941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.875129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.875171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.875508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.875570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.875785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.875810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.875987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.876013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.876228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.876253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.876510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.876535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 
00:34:40.334 [2024-07-14 02:21:45.876733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.876759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.876911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.334 [2024-07-14 02:21:45.876937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.334 qpair failed and we were unable to recover it. 00:34:40.334 [2024-07-14 02:21:45.877113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.877153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.877343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.877371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.877588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.877614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.877806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.877831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.877993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.878019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.878185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.878210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.878357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.878383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.878559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.878585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 
00:34:40.335 [2024-07-14 02:21:45.878765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.878790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.878940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.878967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.879173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.879199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.879419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.879447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.879635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.879663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.879854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.879890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.880064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.880089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.880275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.880304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.880523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.880551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.880750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.880778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 
00:34:40.335 [2024-07-14 02:21:45.880988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.881014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.881241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.881270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.881503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.881532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.881689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.881717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.881910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.881936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.882159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.882188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.882357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.882385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.882556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.882584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.882785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.882811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.883004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.883033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 
00:34:40.335 [2024-07-14 02:21:45.883258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.883284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.883549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.883597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.883818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.883843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.884075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.884102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.884351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.884380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.884569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.884597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.884796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.884822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.884984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.885010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.885232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.885260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.885507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.885532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 
00:34:40.335 [2024-07-14 02:21:45.885688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.885714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.885879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.885908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.886106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.886134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.335 [2024-07-14 02:21:45.886353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.335 [2024-07-14 02:21:45.886381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.335 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.886548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.886574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.886769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.886797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.886995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.887024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.887246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.887275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.887451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.887477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.887670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.887698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 
00:34:40.336 [2024-07-14 02:21:45.887888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.887917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.888080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.888108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.888280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.888305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.888495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.888524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.888742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.888770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.888930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.888959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.889161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.889187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.889393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.889419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.889694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.889742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.889929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.889958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 
00:34:40.336 [2024-07-14 02:21:45.890132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.890162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.890312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.890354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.890525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.890553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.890778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.890803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.890979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.891006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.891229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.891257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.891459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.891505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.891667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.891695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.891960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.891986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.892225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.892250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 
00:34:40.336 [2024-07-14 02:21:45.892401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.892426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.892632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.892680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.892889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.892916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.893088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.893114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.893320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.893346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.893565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.893615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.893837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.893862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.894124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.894150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.894415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.336 [2024-07-14 02:21:45.894444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.336 qpair failed and we were unable to recover it. 00:34:40.336 [2024-07-14 02:21:45.894683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.894731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 
00:34:40.337 [2024-07-14 02:21:45.894944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.894970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.895170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.895199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.895403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.895431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.895667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.895711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.895928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.895954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.896107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.896133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.896365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.896412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.896688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.896740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.896938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.896964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.897139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.897167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 
00:34:40.337 [2024-07-14 02:21:45.897359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.897388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.897649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.897697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.897916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.897942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.898171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.898199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.898424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.898453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.898659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.898706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.898908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.898934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.899113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.899139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.899358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.899386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.899624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.899649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 
00:34:40.337 [2024-07-14 02:21:45.899830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.899855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.900090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.900119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.900347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.900375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.900600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.900626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.900773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.900799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.901022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.901051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.901317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.901346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.901563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.901610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.901782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.901807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.901993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.902020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 
00:34:40.337 [2024-07-14 02:21:45.902244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.902272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.902537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.902583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.902782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.902808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.902995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.903022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.903222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.903254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.903473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.903520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.903783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.903809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.904039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.904068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.904333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.904359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 00:34:40.337 [2024-07-14 02:21:45.904603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.904628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.337 qpair failed and we were unable to recover it. 
00:34:40.337 [2024-07-14 02:21:45.904852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.337 [2024-07-14 02:21:45.904883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.905083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.905111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.905385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.905413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.905666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.905692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.905903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.905929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.906101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.906129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.906322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.906351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.906523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.906551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.906760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.906786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.906994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.907023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 
00:34:40.338 [2024-07-14 02:21:45.907217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.907246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.907450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.907498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.907719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.907744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.907920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.907949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.908168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.908196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.908432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.908479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.908694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.908720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.908919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.908948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.909215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.909244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.909482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.909529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 
00:34:40.338 [2024-07-14 02:21:45.909718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.909744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.909971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.910000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.910199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.910228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.910427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.910453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.910661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.910687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.910894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.910924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.911119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.911148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.911324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.911350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.911567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.911596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.911794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.911820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 
00:34:40.338 [2024-07-14 02:21:45.912037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.912066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.912261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.912289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.912535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.912583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.912791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.912816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.913002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.913028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.913233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.913262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.913506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.913555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.913790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.913816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.914019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.914045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.914261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.914288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 
00:34:40.338 [2024-07-14 02:21:45.914508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.914536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.338 [2024-07-14 02:21:45.914766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.338 [2024-07-14 02:21:45.914791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.338 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.914973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.914999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.915227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.915256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.915532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.915560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.915756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.915782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.915949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.915978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.916249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.916278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.916463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.916491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.916684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.916713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 
00:34:40.339 [2024-07-14 02:21:45.916913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.916939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.917165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.917194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.917386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.917414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.917608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.917636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.917821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.917850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.918073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.918102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.918307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.918332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.918484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.918510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.918707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.918732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.918964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.918991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 
00:34:40.339 [2024-07-14 02:21:45.919146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.919188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.919390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.919415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.919565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.919594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.919752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.919778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.919985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.920013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.920194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.920219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.920409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.920437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.920663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.920692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.920862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.920895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.921089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.921115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 
00:34:40.339 [2024-07-14 02:21:45.921342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.921369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.921563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.921591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.921781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.921810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.922030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.922056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.922286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.922314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.922582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.922608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.922829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.922857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.923065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.923090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.923254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.923282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.923482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.923510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 
00:34:40.339 [2024-07-14 02:21:45.923728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.923756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.923968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.923994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.924219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.339 [2024-07-14 02:21:45.924247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.339 qpair failed and we were unable to recover it. 00:34:40.339 [2024-07-14 02:21:45.924445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.924474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.924669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.924697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.924870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.924896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.925165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.925194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.925417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.925445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.925622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.925650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.925819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.925848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 
00:34:40.340 [2024-07-14 02:21:45.926064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.926092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.926309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.926350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.926589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.926620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.926851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.926887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.927074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.927104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.927303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.927332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.927540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.927568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.927771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.927798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.928002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.928032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.340 [2024-07-14 02:21:45.928208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.928237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 
00:34:40.340 [2024-07-14 02:21:45.928459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.340 [2024-07-14 02:21:45.928488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.340 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.928713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.928739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.928959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.928986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.929176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.929201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.929413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.929456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.929635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.929661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.929903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.929933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.930142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.930172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.930375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.930405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.930600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.930627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 
00:34:40.341 [2024-07-14 02:21:45.930822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.930852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.931058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.931088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.931262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.931291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.931524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.931551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.931714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.931740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.931946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.931978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.932155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.932184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.932380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.932407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.932645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.932675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.932840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.932874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 
00:34:40.341 [2024-07-14 02:21:45.934908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.934942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.935182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.935210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.935443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.935473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.935676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.935705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.935923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.935953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.936166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.936193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.936372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.341 [2024-07-14 02:21:45.936398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.341 qpair failed and we were unable to recover it. 00:34:40.341 [2024-07-14 02:21:45.936594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.936622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.936823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.936851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.937090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.937123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 
00:34:40.342 [2024-07-14 02:21:45.937336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.937366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.937569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.937597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.937828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.937858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.938071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.938098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.938283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.938309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.938485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.938512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.938738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.938768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.938945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.938972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.939204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.939234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.939436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.939465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 
00:34:40.342 [2024-07-14 02:21:45.939641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.939669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.943878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.943910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.944141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.944171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.944378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.944407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.944622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.944651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.944845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.944879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.945118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.945149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.945348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.945377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.945572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.945602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.945803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.945832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 
00:34:40.342 [2024-07-14 02:21:45.946068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.946099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.946303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.946333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.946534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.946564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.946796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.946822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.947070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.947100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.947294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.947323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.947501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.947530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.947756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.947783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.947988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.948018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.948249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.948277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 
00:34:40.342 [2024-07-14 02:21:45.948492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.948535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.948715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.948742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.948942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.948972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.949167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.949196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.949380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.949422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.949733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.949759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.949951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.949980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.342 [2024-07-14 02:21:45.950197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.342 [2024-07-14 02:21:45.950224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.342 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.950439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.950482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.950720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.950751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 
00:34:40.343 [2024-07-14 02:21:45.951005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.951036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.951240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.951268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.951474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.951503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.951714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.951756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.951945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.951972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.952216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.952246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.952439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.952468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.952645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.952673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.952841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.952875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.953085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.953113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 
00:34:40.343 [2024-07-14 02:21:45.953309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.953338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.953542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.953569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.953766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.953795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.954002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.954032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.957879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.957915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.958134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.958162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.958409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.958441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.958674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.958703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.958901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.958931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.959114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.959141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 
00:34:40.343 [2024-07-14 02:21:45.959434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.959464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.959650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.959692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.959860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.959907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.960134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.960162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.960383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.960412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.960595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.960622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.960807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.960837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.961052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.961078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.961315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.961345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.961544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.961574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 
00:34:40.343 [2024-07-14 02:21:45.961803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.961835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.962086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.962113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.962298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.962328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.962566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.962593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.962842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.962874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.963064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.963090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.963295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.963325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.963548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.963577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.343 [2024-07-14 02:21:45.963782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.343 [2024-07-14 02:21:45.963812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.343 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.964021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.964048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 
00:34:40.344 [2024-07-14 02:21:45.964251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.964292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.964533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.964564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.964787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.964816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.965005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.965033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.965409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.965455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.965699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.965729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.965907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.965951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.966184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.966210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.966431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.966471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.966698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.966729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 
00:34:40.344 [2024-07-14 02:21:45.966962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.966989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.967194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.967221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.967403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.967434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.967637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.967667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.967852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.967889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.968088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.968119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.968337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.968367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.968558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.968587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.968808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.968837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.969039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.969065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 
00:34:40.344 [2024-07-14 02:21:45.969299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.969328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.969503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.969531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.969733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.969759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.969999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.970029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.970226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.970257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.970473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.970500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.970741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.970775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.970978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.971009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.971221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.971247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.971526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.971552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 
00:34:40.344 [2024-07-14 02:21:45.971778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.971804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.971973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.972001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.972178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.972219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.972403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.972430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.972588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.972614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.972797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.972823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.973036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.973063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.973232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.973258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.973514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.344 [2024-07-14 02:21:45.973545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-14 02:21:45.973742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.973771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 
00:34:40.345 [2024-07-14 02:21:45.973937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.973967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.974162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.974203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.974411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.974440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.974635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.974664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.974892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.974922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.975155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.975183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.975380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.975407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.975544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.975570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.975783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.975826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.976031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.976059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 
00:34:40.345 [2024-07-14 02:21:45.976253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.976281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.976502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.976531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.976693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.976723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.976909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.976936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.977137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.977167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.977387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.977416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.977755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.977807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.978036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.978064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.978248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.978277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.978497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.978526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 
00:34:40.345 [2024-07-14 02:21:45.978750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.978777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.978986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.979013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.979187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.979214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.979413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.979439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.979619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.979648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.979847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.979887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.980066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.980096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.980331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.980360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.980698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.980756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.980960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.980987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 
00:34:40.345 [2024-07-14 02:21:45.981165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.981192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.981420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.981450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.981668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.981694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.981898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.345 [2024-07-14 02:21:45.981943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-14 02:21:45.982169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.982200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.982390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.982420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.982615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.982645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.982829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.982859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.983048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.983077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.983271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.983300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 
00:34:40.346 [2024-07-14 02:21:45.983502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.983529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.983729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.983758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.983956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.983986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.984216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.984262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.984480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.984507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.984728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.984757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.984940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.984968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.985194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.985223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.985403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.985429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.985621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.985649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 
00:34:40.346 [2024-07-14 02:21:45.985878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.985908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.986107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.986133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.986308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.986335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.986492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.986519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.986739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.986769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.986964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.986995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.987198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.987224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.987447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.987476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.346 [2024-07-14 02:21:45.987662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.346 [2024-07-14 02:21:45.987690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.346 qpair failed and we were unable to recover it. 00:34:40.639 [2024-07-14 02:21:45.987882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.987919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 
00:34:40.639 [2024-07-14 02:21:45.988117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.988144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 00:34:40.639 [2024-07-14 02:21:45.988368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.988398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 00:34:40.639 [2024-07-14 02:21:45.988595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.988626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 00:34:40.639 [2024-07-14 02:21:45.988818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.988847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 00:34:40.639 [2024-07-14 02:21:45.989051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.989079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 00:34:40.639 [2024-07-14 02:21:45.989272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.989302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 00:34:40.639 [2024-07-14 02:21:45.989466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.989499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 00:34:40.639 [2024-07-14 02:21:45.989691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.989721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 00:34:40.639 [2024-07-14 02:21:45.989892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.639 [2024-07-14 02:21:45.989919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.639 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.990138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.990167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 
00:34:40.640 [2024-07-14 02:21:45.990336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.990367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.990633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.990689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.990915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.990944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.991173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.991202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.991407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.991437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.991628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.991657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.991856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.991897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.992124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.992153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.992318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.992347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.992545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.992614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 
00:34:40.640 [2024-07-14 02:21:45.992850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.992882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.993118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.993147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.993341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.993370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.993758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.993812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.994007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.994035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.994194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.994221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.994446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.994475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.994893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.994959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.995162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.995189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.995342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.995369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 
00:34:40.640 [2024-07-14 02:21:45.995566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.995595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.995753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.995782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.995952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.995979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.996179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.996208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.996404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.996432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.996623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.996675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.996851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.996884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.997112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.997141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.997337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.997366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.997638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.997692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 
00:34:40.640 [2024-07-14 02:21:45.997876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.997903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.998079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.998106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.998258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.998286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.998596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.998652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.998853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.998885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.999041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.999068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.999244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.999275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.999473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.640 [2024-07-14 02:21:45.999502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.640 qpair failed and we were unable to recover it. 00:34:40.640 [2024-07-14 02:21:45.999702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:45.999729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:45.999965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:45.999993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 
00:34:40.641 [2024-07-14 02:21:46.000214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.000244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.000464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.000494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.000698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.000726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.000956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.000987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.001191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.001220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.001416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.001446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.001621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.001650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.001855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.001890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.002072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.002099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.002348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.002375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 
00:34:40.641 [2024-07-14 02:21:46.002580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.002606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.002821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.002851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.003028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.003055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.003280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.003310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.003534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.003563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.003744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.003771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.003957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.003985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.004212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.004242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.004465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.004492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.004695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.004725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 
00:34:40.641 [2024-07-14 02:21:46.004950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.004978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.005160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.005187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.005389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.005416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.005782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.005832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.006017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.006045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.006226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.006253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.006458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.006503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.006684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.006714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.006900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.006945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.007101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.007128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 
00:34:40.641 [2024-07-14 02:21:46.007304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.007331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.007548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.007592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.007772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.007800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.008006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.008034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.008204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.008233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.008403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.008433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.008600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.008630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.008798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.641 [2024-07-14 02:21:46.008825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.641 qpair failed and we were unable to recover it. 00:34:40.641 [2024-07-14 02:21:46.009028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.009056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.009230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.009257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 
00:34:40.642 [2024-07-14 02:21:46.009498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.009525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.009749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.009779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.009960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.009987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.010142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.010183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.010385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.010412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.010705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.010760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.010989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.011016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.011199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.011226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.011439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.011466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.011622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.011665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 
00:34:40.642 [2024-07-14 02:21:46.011887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.011931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.012137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.012180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.012372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.012399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.012584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.012610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.012837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.012872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.013048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.013073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.013230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.013258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.013433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.013463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.013653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.013685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.013888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.013918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 
00:34:40.642 [2024-07-14 02:21:46.014122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.014148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.014291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.014318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.014509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.014539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.014775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.014805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.014983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.015011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.015239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.015268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.015492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.015522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.015721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.015750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.015904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.015932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.016130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.016159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 
00:34:40.642 [2024-07-14 02:21:46.016359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.016388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.016658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.016711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.016910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.016937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.017163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.017193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.017358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.017388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.017626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.017679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.017887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.017919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.642 [2024-07-14 02:21:46.018096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.642 [2024-07-14 02:21:46.018122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.642 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.018310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.018340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.018570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.018601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 
00:34:40.643 [2024-07-14 02:21:46.018828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.018855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.019036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.019067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.019264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.019293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.019551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.019579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.019758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.019784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.019985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.020016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.020211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.020242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.020554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.020612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.020807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.020835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.021090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.021117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 
00:34:40.643 [2024-07-14 02:21:46.021306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.021333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.021595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.021621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.021822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.021852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.022055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.022083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.022277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.022308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.022467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.022498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.022705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.022732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.022905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.022936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.023101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.023129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.023334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.023360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 
00:34:40.643 [2024-07-14 02:21:46.023543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.023571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.023736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.023767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.023974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.024005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.024209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.024239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.024441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.024468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.024636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.024665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.024823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.024853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.025063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.025090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.025270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.025297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.025470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.025497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 
00:34:40.643 [2024-07-14 02:21:46.025662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.025691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.025883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.025911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.026109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.026137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.026288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.026315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.026461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.026503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.026699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.026728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.026937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.026968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.643 [2024-07-14 02:21:46.027159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.643 [2024-07-14 02:21:46.027189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.643 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.027419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.027446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.027615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.027680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 
00:34:40.644 [2024-07-14 02:21:46.027913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.027940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.028112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.028142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.028366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.028393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.028592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.028622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.028826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.028856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.029086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.029113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.029322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.029351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.029586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.029634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.029831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.029858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.030032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.030061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 
00:34:40.644 [2024-07-14 02:21:46.030273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.030303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.030502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.030532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.030720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.030750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.030980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.031008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.031210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.031240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.031442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.031489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.031685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.031711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.031893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.031920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.032125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.032169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.032339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.032367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 
00:34:40.644 [2024-07-14 02:21:46.032569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.032596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.032764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.032795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.032999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.033026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.033205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.033235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.033517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.033571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.033791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.033821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.034030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.034057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.034213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.034239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.034450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.034476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.034670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.034698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 
00:34:40.644 [2024-07-14 02:21:46.034864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.644 [2024-07-14 02:21:46.034899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.644 qpair failed and we were unable to recover it. 00:34:40.644 [2024-07-14 02:21:46.035125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.035153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.035335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.035363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.035570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.035599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.035840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.035877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.036050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.036077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.036276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.036306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.036536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.036565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.036764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.036794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.037023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.037054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 
00:34:40.645 [2024-07-14 02:21:46.037286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.037312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.037512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.037542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.037734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.037764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.038000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.038027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.038202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.038230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.038465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.038495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.038661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.038690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.038918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.038945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.039153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.039179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.039385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.039411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 
00:34:40.645 [2024-07-14 02:21:46.039643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.039672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.039877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.039906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.040078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.040104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.040303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.040334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.040518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.040549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.040783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.040813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.041012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.041040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.041270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.041300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.041526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.041556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.041790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.041819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 
00:34:40.645 [2024-07-14 02:21:46.042061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.042089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.042290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.042320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.042539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.042569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.042769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.042799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.042982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.043009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.043200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.043230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.043449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.043480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.043713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.043743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.043909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.043937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.044135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.044166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 
00:34:40.645 [2024-07-14 02:21:46.044389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.645 [2024-07-14 02:21:46.044416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.645 qpair failed and we were unable to recover it. 00:34:40.645 [2024-07-14 02:21:46.044638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.044668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.044885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.044913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.045082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.045112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.045331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.045361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.045590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.045639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.045809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.045839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.046076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.046105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.046296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.046325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.046620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.046676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 
00:34:40.646 [2024-07-14 02:21:46.046878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.046905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.047081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.047112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.047338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.047366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.047703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.047765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.047966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.047994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.048201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.048231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.048430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.048458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.048626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.048654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.048830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.048857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.049081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.049110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 
00:34:40.646 [2024-07-14 02:21:46.049337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.049367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.049609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.049639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.049875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.049902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.050077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.050106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.050306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.050335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.050692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.050743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.050963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.050991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.051192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.051222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.051416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.051447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.051760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.051824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 
00:34:40.646 [2024-07-14 02:21:46.052022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.052049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.052218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.052248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.052446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.052474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.052633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.052659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.052811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.052838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.053046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.053076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.053274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.053306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.053501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.053531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.053709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.053736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 00:34:40.646 [2024-07-14 02:21:46.053940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.646 [2024-07-14 02:21:46.053967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.646 qpair failed and we were unable to recover it. 
00:34:40.651 [2024-07-14 02:21:46.098216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.651 [2024-07-14 02:21:46.098242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.651 qpair failed and we were unable to recover it. 00:34:40.651 [2024-07-14 02:21:46.098419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.651 [2024-07-14 02:21:46.098446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.651 qpair failed and we were unable to recover it. 00:34:40.651 [2024-07-14 02:21:46.098647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.651 [2024-07-14 02:21:46.098675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.651 qpair failed and we were unable to recover it. 00:34:40.651 [2024-07-14 02:21:46.098852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.651 [2024-07-14 02:21:46.098890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.651 qpair failed and we were unable to recover it. 00:34:40.651 [2024-07-14 02:21:46.099099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.651 [2024-07-14 02:21:46.099125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.651 qpair failed and we were unable to recover it. 00:34:40.651 [2024-07-14 02:21:46.099354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.651 [2024-07-14 02:21:46.099383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.651 qpair failed and we were unable to recover it. 00:34:40.651 [2024-07-14 02:21:46.099590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.651 [2024-07-14 02:21:46.099615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.651 qpair failed and we were unable to recover it. 00:34:40.651 [2024-07-14 02:21:46.099790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.651 [2024-07-14 02:21:46.099817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.651 qpair failed and we were unable to recover it. 00:34:40.651 [2024-07-14 02:21:46.099965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.651 [2024-07-14 02:21:46.099992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.651 qpair failed and we were unable to recover it. 00:34:40.651 [2024-07-14 02:21:46.100145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.100190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 
00:34:40.652 [2024-07-14 02:21:46.100412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.100442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.100641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.100669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.100879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.100906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.101103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.101132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.101356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.101385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.101660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.101706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.101909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.101936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.102117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.102143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.102336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.102367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.102702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.102757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 
00:34:40.652 [2024-07-14 02:21:46.102949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.102976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.103157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.103184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.103366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.103396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.103614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.103662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.103853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.103886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.104045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.104072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.104250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.104277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.104456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.104483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.104665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.104691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.104873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.104899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 
00:34:40.652 [2024-07-14 02:21:46.105084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.105112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.105306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.105333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.105513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.105540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.105694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.105720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.105897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.652 [2024-07-14 02:21:46.105926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.652 qpair failed and we were unable to recover it. 00:34:40.652 [2024-07-14 02:21:46.106086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.106113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.106312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.106339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.106524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.106551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.106722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.106749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.106934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.106962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 
00:34:40.653 [2024-07-14 02:21:46.107189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.107216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.107372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.107399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.107551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.107577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.107779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.107813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.108009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.108036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.108212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.108254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.108477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.108507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.108726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.108755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.108955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.108982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.109133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.109161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 
00:34:40.653 [2024-07-14 02:21:46.109341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.109367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.109592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.109637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.109827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.109857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.110068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.110094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.110250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.110277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.110430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.110459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.110634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.110661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.110842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.110875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.111056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.653 [2024-07-14 02:21:46.111083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.653 qpair failed and we were unable to recover it. 00:34:40.653 [2024-07-14 02:21:46.111288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.111318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 
00:34:40.654 [2024-07-14 02:21:46.111542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.111568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.111725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.111754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.111951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.111982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.112157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.112186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.112389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.112415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.112593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.112620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.112798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.112825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.113009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.113040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.113220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.113247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.113422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.113449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 
00:34:40.654 [2024-07-14 02:21:46.113636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.113663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.113846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.113877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.114030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.114056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.114208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.114235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.114441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.114468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.114643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.114670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.114849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.114881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.115122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.115148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.115329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.115355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.115556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.115586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 
00:34:40.654 [2024-07-14 02:21:46.115809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.115836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.115993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.116020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.116222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.116252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.116499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.116529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.116700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.116726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.116889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.116916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.117125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.117154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.117348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.117379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.117546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.117574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.117731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.117758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 
00:34:40.654 [2024-07-14 02:21:46.117938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.117966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.118140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.118185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.118386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.118413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.118553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.118579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.118781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.118807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.119009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.119038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.119263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.119290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.119462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.119491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.119728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.654 [2024-07-14 02:21:46.119755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.654 qpair failed and we were unable to recover it. 00:34:40.654 [2024-07-14 02:21:46.119904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.119932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 
00:34:40.655 [2024-07-14 02:21:46.120114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.120142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.120359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.120390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.120610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.120639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.120870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.120897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.121081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.121107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.121287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.121313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.121545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.121572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.121783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.121811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.121983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.122011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.122209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.122238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 
00:34:40.655 [2024-07-14 02:21:46.122435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.122466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.122694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.122759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.122944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.122972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.123141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.123168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.123340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.123366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.123527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.123553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.123698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.123726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.123904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.123932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.124114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.124140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.124288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.124314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 
00:34:40.655 [2024-07-14 02:21:46.124516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.124543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.124739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.124769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.124977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.125006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.125237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.125287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.125465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.125493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.125669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.125696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.125899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.125926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.126107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.126136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.126335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.126362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.126538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.126565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 
00:34:40.655 [2024-07-14 02:21:46.126750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.126780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.126988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.127015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.127192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.127219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.127438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.127467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.127666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.127692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.127898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.127942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.128164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.128192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.128377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.128404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.128629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.128658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.655 qpair failed and we were unable to recover it. 00:34:40.655 [2024-07-14 02:21:46.128854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.655 [2024-07-14 02:21:46.128890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 
00:34:40.656 [2024-07-14 02:21:46.129091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.129118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.129322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.129352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.129552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.129583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.129756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.129786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.129969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.129996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.130178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.130206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.130410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.130454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.130711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.130741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.130935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.130962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.131158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.131189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 
00:34:40.656 [2024-07-14 02:21:46.131417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.131445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.131625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.131652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.131854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.131886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.132039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.132067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.132276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.132307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.132530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.132560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.132738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.132765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.132970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.132997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.133200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.133244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.133626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.133684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 
00:34:40.656 [2024-07-14 02:21:46.133915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.133943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.134148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.134178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.134366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.134397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.134619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.134655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.134886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.134913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.135095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.135121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.135318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.135347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.135545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.135574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.135776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.135804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.136025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.136055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 
00:34:40.656 [2024-07-14 02:21:46.136251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.136281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.136503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.136532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.136697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.136723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.136902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.136929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.137075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.137103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.137295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.137322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.137525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.137552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.137710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.137737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.137938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.137968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.656 qpair failed and we were unable to recover it. 00:34:40.656 [2024-07-14 02:21:46.138190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.656 [2024-07-14 02:21:46.138220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 
00:34:40.657 [2024-07-14 02:21:46.138433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.138459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.138690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.138720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.138942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.138972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.139171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.139200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.139400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.139426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.139628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.139654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.139839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.139870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.140051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.140079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.140288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.140315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.140540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.140569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 
00:34:40.657 [2024-07-14 02:21:46.140737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.140767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.140986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.141017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.141218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.141245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.141448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.141475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.141631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.141658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.141886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.141914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.142123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.142149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.142328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.142355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.142529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.142561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.142733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.142763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 
00:34:40.657 [2024-07-14 02:21:46.142962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.142988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.143204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.143231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.143437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.143463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.143663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.143694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.143837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.143863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.144052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.144078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.144297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.144327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.144492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.144521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.144718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.144744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.144925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.144969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 
00:34:40.657 [2024-07-14 02:21:46.145190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.145220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.145424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.145451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.145630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.145657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.145885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.657 [2024-07-14 02:21:46.145915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.657 qpair failed and we were unable to recover it. 00:34:40.657 [2024-07-14 02:21:46.146106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.146136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.146361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.146387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.146568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.146594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.146801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.146828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.147081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.147112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.147393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.147446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 
00:34:40.658 [2024-07-14 02:21:46.147664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.147691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.147901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.147928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.148107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.148151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.148413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.148467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.148686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.148714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.148894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.148924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.149126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.149155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.149350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.149380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.149595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.149622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.149821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.149848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 
00:34:40.658 [2024-07-14 02:21:46.150040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.150067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.150265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.150295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.150469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.150497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.150678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.150719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.150895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.150940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.151121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.151148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.151352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.151378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.151606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.151635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.151804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.151833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.152056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.152086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 
00:34:40.658 [2024-07-14 02:21:46.152295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.152322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.152503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.152530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.152725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.152755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.152979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.153014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.153216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.153243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.153440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.153469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.153630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.153659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.153892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.153919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.154071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.154098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.154275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.154302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 
00:34:40.658 [2024-07-14 02:21:46.154481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.154507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.154679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.154707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.154941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.154967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.155171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.658 [2024-07-14 02:21:46.155215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.658 qpair failed and we were unable to recover it. 00:34:40.658 [2024-07-14 02:21:46.155406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.155436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.155752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.155818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.156051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.156078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.156286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.156313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.156509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.156538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.156697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.156737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 
00:34:40.659 [2024-07-14 02:21:46.156940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.156968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.157128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.157154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.157300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.157327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.157483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.157510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.157688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.157715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.157908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.157939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.158131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.158160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.158355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.158384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.158588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.158615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.158788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.158816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 
00:34:40.659 [2024-07-14 02:21:46.159018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.159050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.159236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.159265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.159467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.159494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.159720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.159749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.159978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.160009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.160188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.160214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.160388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.160415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.160593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.160621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.160817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.160847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.161046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.161072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 
00:34:40.659 [2024-07-14 02:21:46.161252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.161278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.161452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.161496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.161718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.161748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.161968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.162002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.162194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.162220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.162367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.162394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.162577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.162603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.162757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.162784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.162961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.162988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.163144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.163171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 
00:34:40.659 [2024-07-14 02:21:46.163326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.163352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.163552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.163582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.163778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.163804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.164009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.164037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.164209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.659 [2024-07-14 02:21:46.164236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.659 qpair failed and we were unable to recover it. 00:34:40.659 [2024-07-14 02:21:46.164410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.164435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.164613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.164639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.164830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.164860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.165038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.165067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.165243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.165275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 
00:34:40.660 [2024-07-14 02:21:46.165474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.165501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.165706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.165732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.165909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.165935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.166180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.166240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.166412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.166439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.166618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.166662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.166857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.166892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.167089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.167119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.167298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.167325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.167502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.167528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 
00:34:40.660 [2024-07-14 02:21:46.167745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.167771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.167943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.167974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.168207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.168233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.168383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.168410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.168632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.168662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.168854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.168891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.169113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.169139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.169370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.169400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.169588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.169618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.169786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.169817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 
00:34:40.660 [2024-07-14 02:21:46.170022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.170049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.170240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.170266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.170420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.170447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.170622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.170652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.170844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.170880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.171071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.171098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.171299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.171328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.171492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.171522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.171697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.171724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.171904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.171931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 
00:34:40.660 [2024-07-14 02:21:46.172085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.172112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.172267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.172295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.172450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.172477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.172676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.172702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.172862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.172894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.660 [2024-07-14 02:21:46.173089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.660 [2024-07-14 02:21:46.173118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.660 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.173287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.173313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.173469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.173495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.173677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.173704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.173881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.173908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 
00:34:40.661 [2024-07-14 02:21:46.174056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.174083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.174298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.174324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.174479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.174506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.174668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.174694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.174901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.174929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.175124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.175155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.175356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.175385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.175603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.175632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.175858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.175902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.176069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.176098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 
00:34:40.661 [2024-07-14 02:21:46.176293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.176323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.176566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.176592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.176747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.176775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.177003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.177034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.177235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.177265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.177491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.177549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.177752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.177779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.178004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.178034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.178229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.178258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.178485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.178512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 
00:34:40.661 [2024-07-14 02:21:46.178717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.178743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.178944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.178974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.179196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.179225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.179516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.179570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.179805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.179832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.180069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.180099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.180293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.180322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.180666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.180717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.180942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.180969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.181142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.181172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 
00:34:40.661 [2024-07-14 02:21:46.181367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.181397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.181599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.181633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.661 qpair failed and we were unable to recover it. 00:34:40.661 [2024-07-14 02:21:46.181850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.661 [2024-07-14 02:21:46.181885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.182079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.182107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.182297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.182327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.182526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.182552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.182759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.182785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.182970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.182997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.183200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.183230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.183443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.183496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 
00:34:40.662 [2024-07-14 02:21:46.183693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.183721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.183945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.183976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.184183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.184213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.184535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.184596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.184793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.184820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.184997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.185024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.185228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.185255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.185561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.185609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.185842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.185876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.186063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.186094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 
00:34:40.662 [2024-07-14 02:21:46.186319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.186349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.186544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.186593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.186822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.186849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.187044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.187074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.187296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.187326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.187554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.187584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.187812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.187839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.188077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.188108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.188329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.188358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.188589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.188637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 
00:34:40.662 [2024-07-14 02:21:46.188838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.188874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.189077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.189106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.189333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.189363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.189753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.189816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.190030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.190057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.190252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.190283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.190506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.190535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.190930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.190960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.191181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.191208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.191437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.191467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 
00:34:40.662 [2024-07-14 02:21:46.191688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.662 [2024-07-14 02:21:46.191718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.662 qpair failed and we were unable to recover it. 00:34:40.662 [2024-07-14 02:21:46.191896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.191923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.192115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.192142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.192371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.192400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.192599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.192629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.192850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.192888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.193062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.193088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.193291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.193320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.193493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.193523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.193723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.193753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 
00:34:40.663 [2024-07-14 02:21:46.193951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.193978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.194171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.194201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.194421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.194451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.194651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.194681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.194878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.194906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.195112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.195141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.195308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.195339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.195518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.195548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.195771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.195798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.195954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.195990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 
00:34:40.663 [2024-07-14 02:21:46.196180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.196208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.196449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.196476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.196646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.196674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.196842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.196883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.197057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.197084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.197236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.197263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.197465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.197492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.197701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.197730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.197925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.197956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.198242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.198292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 
00:34:40.663 [2024-07-14 02:21:46.198510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.198537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.198770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.198800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.198961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.198992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.199163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.199193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.199396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.199422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.199615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.199644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.199872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.199902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.200106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.200132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.200340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.200367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.200568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.200598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 
00:34:40.663 [2024-07-14 02:21:46.200824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.200851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.663 [2024-07-14 02:21:46.201070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.663 [2024-07-14 02:21:46.201098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.663 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.201307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.201334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.201538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.201567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.201727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.201757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.201931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.201961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.202165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.202192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.202389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.202420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.202580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.202610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.202801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.202831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 
00:34:40.664 [2024-07-14 02:21:46.203037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.203065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.203271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.203298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.203492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.203522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.203734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.203761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.203938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.203966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.204170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.204200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.204420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.204449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.204647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.204676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.204878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.204909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.205108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.205151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 
00:34:40.664 [2024-07-14 02:21:46.205319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.205355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.205592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.205619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.205810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.205836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.206044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.206087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.206317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.206347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.206599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.206648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.206844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.206877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.207102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.207132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.207370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.207400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 00:34:40.664 [2024-07-14 02:21:46.207745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.664 [2024-07-14 02:21:46.207802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.664 qpair failed and we were unable to recover it. 
00:34:40.664 [2024-07-14 02:21:46.208001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.664 [2024-07-14 02:21:46.208029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420
00:34:40.664 qpair failed and we were unable to recover it.
[identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." messages for tqpair=0x7f8df4000b90 repeat through 02:21:46.259]
00:34:40.670 [2024-07-14 02:21:46.259181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.670 [2024-07-14 02:21:46.259224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420
00:34:40.670 qpair failed and we were unable to recover it.
00:34:40.670 [2024-07-14 02:21:46.259456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.259488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.259711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.259755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.259959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.259987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.260213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.260244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.260449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.260479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.260696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.260726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.260943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.260985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.261177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.261208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.261399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.261430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.261628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.261658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 
00:34:40.670 [2024-07-14 02:21:46.261841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.261877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.262077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.262108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.262273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.262304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.262503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.262533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.262751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.262778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.262985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.263017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.263209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.263239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.263454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.263485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.263695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.263722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.263951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.263982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 
00:34:40.670 [2024-07-14 02:21:46.264182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.264216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.264440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.264472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.264721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.264747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.264954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.264985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.265155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.265198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.265392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.265423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.265643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.265671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.265840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.265878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.266084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.266111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.266427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.266458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 
00:34:40.670 [2024-07-14 02:21:46.266667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.670 [2024-07-14 02:21:46.266695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.670 qpair failed and we were unable to recover it. 00:34:40.670 [2024-07-14 02:21:46.266935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.266966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.267164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.267194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.267588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.267644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.267828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.267855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.268111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.268142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.268351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.268380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.268577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.268609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.268806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.268833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.269053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.269084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 
00:34:40.671 [2024-07-14 02:21:46.269280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.269310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.269570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.269601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.269876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.269904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.270117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.270159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.270345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.270373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.270599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.270644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.270873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.270904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.271077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.271106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.271312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.271343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.271541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.271573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 
00:34:40.671 [2024-07-14 02:21:46.271773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.271800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.272020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.272064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.272274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.272302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.272625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.272674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.272910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.272938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.273119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.273150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.273372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.273402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.273813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.273873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.274152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.274180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.274457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.274485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 
00:34:40.671 [2024-07-14 02:21:46.274732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.274768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.274955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.274987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.275167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.275195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.275367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.275394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.275583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.275611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.275810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.275837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.276034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.276063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.276258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.276286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.276553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.276579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.276863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.276903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 
00:34:40.671 [2024-07-14 02:21:46.277099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.277125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.671 [2024-07-14 02:21:46.277331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.671 [2024-07-14 02:21:46.277362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.671 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.277583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.277614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.277811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.277839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.278041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.278069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.278243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.278275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.278498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.278528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.278692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.278723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.278891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.278920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.279126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.279170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 
00:34:40.672 [2024-07-14 02:21:46.279372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.279403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.279600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.279631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.279870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.279898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.280094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.280124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.280316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.280348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.280547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.280577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.280770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.280798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.281026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.281074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.281252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.281283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.281476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.281505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 
00:34:40.672 [2024-07-14 02:21:46.281704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.281731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.281933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.281963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.282159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.282188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.282573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.282639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.282923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.282950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.283211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.283238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.283455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.283485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.283881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.283930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.284129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.284156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.284356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.284387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 
00:34:40.672 [2024-07-14 02:21:46.284588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.284624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.284846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.284882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.285063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.285091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.285316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.285346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.285542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.285573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.285794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.285823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.286032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.286059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.286257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.286287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.286495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.286522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.286684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.286711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 
00:34:40.672 [2024-07-14 02:21:46.286888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.286916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.672 [2024-07-14 02:21:46.287081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.672 [2024-07-14 02:21:46.287107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.672 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.287312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.287342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.287607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.287661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.287878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.287906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.288131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.288161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.288361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.288393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.288694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.288752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.288950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.288978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.289182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.289212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 
00:34:40.673 [2024-07-14 02:21:46.289427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.289456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.289841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.289911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.290136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.290163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.290317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.290344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.290567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.290597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.290796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.290825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.291039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.291067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.291473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.291545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.291774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.291806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.291968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.291998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 
00:34:40.673 [2024-07-14 02:21:46.292177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.292204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.292468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.292520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.292737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.292768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.292944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.292976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.293176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.293204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.293525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.293576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.293750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.293778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.293937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.293965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.294170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.294198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.294399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.294429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 
00:34:40.673 [2024-07-14 02:21:46.294628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.294664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.294951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.294980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.295184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.295211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.295591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.295653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.295877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.295908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.296131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.296161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.296371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.296400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.296738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.296791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.296983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.297015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.297210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.297242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 
00:34:40.673 [2024-07-14 02:21:46.297408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.297436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.297740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.673 [2024-07-14 02:21:46.297792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.673 qpair failed and we were unable to recover it. 00:34:40.673 [2024-07-14 02:21:46.297965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.297998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.298223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.298250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.298458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.298486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.298715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.298745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.298917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.298949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.299141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.299172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.299370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.299397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.299686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.299745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 
00:34:40.674 [2024-07-14 02:21:46.299983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.300021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.300204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.300232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.300404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.300432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.300758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.300810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.301006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.301033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.301277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.301308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.301520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.301547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.301752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.301790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.301982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.302011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.302213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.302243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 
00:34:40.674 [2024-07-14 02:21:46.302473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.302516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.302758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.302786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.303009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.303054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.303309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.303341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.303553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.303580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.303793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.303823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.304033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.304063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.304259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.304289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.304521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.304549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.304752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.304783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 
00:34:40.674 [2024-07-14 02:21:46.304998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.305030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.305233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.305263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.305459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.305487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.305658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.305685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.305919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.305951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.306279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.306348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.306570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.306611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.306813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.306839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.674 qpair failed and we were unable to recover it. 00:34:40.674 [2024-07-14 02:21:46.307101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.674 [2024-07-14 02:21:46.307133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.308390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.308425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 
00:34:40.675 [2024-07-14 02:21:46.308686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.308714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.308921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.308964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.309205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.309234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.309389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.309419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.309622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.309650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.309893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.309923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.310128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.310160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.310355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.310385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.310580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.310608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.310846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.310883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 
00:34:40.675 [2024-07-14 02:21:46.311057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.311089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.311317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.311349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.675 [2024-07-14 02:21:46.311537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.675 [2024-07-14 02:21:46.311562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.675 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.312799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.312838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.313070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.313102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.313311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.313341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.313546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.313573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.313780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.313810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.313989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.314019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.314179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.314208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 
00:34:40.956 [2024-07-14 02:21:46.314403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.314431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.314655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.314685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.314880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.314918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.315111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.315143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.315371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.315398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.956 [2024-07-14 02:21:46.315629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.956 [2024-07-14 02:21:46.315659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.956 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.315859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.315895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.316072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.316102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.316266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.316293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.316498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.316528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 
00:34:40.957 [2024-07-14 02:21:46.316716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.316749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.316954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.316991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.317191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.317218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.317419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.317449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.317668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.317698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.317877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.317917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.318143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.318170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.318343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.318371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.318568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.318597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.318794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.318823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 
00:34:40.957 [2024-07-14 02:21:46.319045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.319072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.319283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.319325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.319496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.319527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.319831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.319905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.320135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.320161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.320361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.320392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.320568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.320598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.320831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.320861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.321100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.321135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.321345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.321374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 
00:34:40.957 [2024-07-14 02:21:46.321553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.321579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.321804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.321834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.322042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.322070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.322281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.322323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.322524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.322550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.322758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.322785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.323002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.323029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.323204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.323232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.323451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.323481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.323673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.323700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 
00:34:40.957 [2024-07-14 02:21:46.323856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.323888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.324062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.324092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.324283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.324312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.324647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.324699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.324907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.324934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.325124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.325153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.325395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.325425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.325673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.325723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.325953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.325980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.326186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.326216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 
00:34:40.957 [2024-07-14 02:21:46.326383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.326413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.326706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.326755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.327000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.327029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.327242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.327279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.327479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.327509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.327690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.327717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.327871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.327899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.328099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.328129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.328323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.328354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.328717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.328781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 
00:34:40.957 [2024-07-14 02:21:46.328985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.329011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.329211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.329241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.329443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.329471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.329713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.329743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.329952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.329980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.330153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.330180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.330369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.330396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.330606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.330638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.330839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.330881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.331057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.331084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 
00:34:40.957 [2024-07-14 02:21:46.331298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.331325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.331508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.331535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.331717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.331744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.331910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.331942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.332139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.332168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.332385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.332415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.332592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.332618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.332846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.332888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.333089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.333120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.333511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.333573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 
00:34:40.957 [2024-07-14 02:21:46.333775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.333803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.334038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.334069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.334275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.334305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.334703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.334759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.334951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.334980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.335148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.335178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.335377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.335404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.335559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.335587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.335795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.335822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.336063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.336094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 
00:34:40.957 [2024-07-14 02:21:46.336329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.336359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1746083 Killed "${NVMF_APP[@]}" "$@" 00:34:40.957 [2024-07-14 02:21:46.336725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.336777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 [2024-07-14 02:21:46.337009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.337037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:40.957 [2024-07-14 02:21:46.337246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.957 [2024-07-14 02:21:46.337274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.957 qpair failed and we were unable to recover it. 00:34:40.957 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:40.957 [2024-07-14 02:21:46.337477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.337505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:40.958 [2024-07-14 02:21:46.337753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:40.958 [2024-07-14 02:21:46.337784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.337955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.337983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 
00:34:40.958 [2024-07-14 02:21:46.338193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.338223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.338450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.338478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.338631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.338659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.338872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.338899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.339081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.339116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.339322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.339349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.339667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.339728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.339935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.339964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.340173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.340203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.340424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.340454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 
00:34:40.958 [2024-07-14 02:21:46.340681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.340710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.340890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.340917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.341095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.341122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.341329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.341355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.341569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.341598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.341795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.341822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1746535 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1746535 00:34:40.958 [2024-07-14 02:21:46.342059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.342094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1746535 ']' 00:34:40.958 [2024-07-14 02:21:46.342316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.342345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 
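The xtrace lines interleaved above show the restart itself: nvmf/common.sh records nvmfpid=1746535 and launches a fresh target inside the cvl_0_0_ns_spdk network namespace with "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF0", then hands the pid to waitforlisten. A condensed sketch of that sequence, using the arguments from the log ($SPDK_BIN stands in for the workspace path; waitforlisten is the suite's own helper):

  # Restart pattern from the trace. Per SPDK's common app options, -i sets the
  # shared-memory id, -m is the core mask (0xF0 pins the app to cores 4-7), and
  # -e 0xFFFF is the tracepoint-group mask for the run.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"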
00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:40.958 [2024-07-14 02:21:46.342572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.342604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.958 [2024-07-14 02:21:46.342802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.342833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:40.958 [2024-07-14 02:21:46.343012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.343039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.343198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.343225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.343400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.343430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.343657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.343684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.343917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.343947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 
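waitforlisten's echo above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") shows what the wait is gated on: the new target's JSON-RPC socket rather than the NVMe/TCP listener the host keeps retrying, so refused connects can continue to appear until the listeners are reconfigured. A rough stand-in for that wait (not the suite's actual implementation):

  # Hypothetical readiness poll against the RPC socket path echoed in the log;
  # the kill -0 guard approximates giving up if the target dies before listening.
  while ! [ -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; break; }
      sleep 0.1
  done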
00:34:40.958 [2024-07-14 02:21:46.344183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.344212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.344389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.344419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.344642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.344668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.344898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.344929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.345124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.345155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.345378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.345408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.345631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.345659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.345834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.345861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.346048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.346075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.346257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.346284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 
00:34:40.958 [2024-07-14 02:21:46.346490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.346516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.346671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.346699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.346900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.346928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.347104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.347133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.347311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.347344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.347508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.347536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.347741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.347769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.347946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.347973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.348151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.348179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.348380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.348407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 
00:34:40.958 [2024-07-14 02:21:46.348564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.348592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.348745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.348773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.348975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.349003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.349184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.349212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.349391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.349417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.349620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.349647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.349800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.349828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.350018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.350045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.350254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.350285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.350469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.350496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 
00:34:40.958 [2024-07-14 02:21:46.350652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.350677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.350854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.350888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.351076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.351103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.351287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.351314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.351463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.351489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.351697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.351723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.352006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.352047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.352245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.352272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.352431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.352457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.352634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.352660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 
00:34:40.958 [2024-07-14 02:21:46.352832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.352858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.353020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.353046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.353209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.353235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.353489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.353514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.353697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.353725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.353891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.353917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.354072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.354098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.354294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.354320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.354497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.354523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.354704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.354730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 
00:34:40.958 [2024-07-14 02:21:46.354914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.958 [2024-07-14 02:21:46.354940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.958 qpair failed and we were unable to recover it. 00:34:40.958 [2024-07-14 02:21:46.355108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.355133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.355338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.355363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.355504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.355530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.355680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.355706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.355863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.355901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.356057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.356083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.356264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.356290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.356498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.356524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.356704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.356730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 
00:34:40.959 [2024-07-14 02:21:46.356915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.356941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.357121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.357147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.357323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.357349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.357550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.357576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.357726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.357751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.357933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.357959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.358107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.358132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.358336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.358361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.358538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.358564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.358822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.358849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 
00:34:40.959 [2024-07-14 02:21:46.359015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.359040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.359212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.359238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.359525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.359550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.359759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.359785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.359995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.360021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.360203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.360231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.360409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.360434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.360616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.360642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.360850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.360885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.361058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.361084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 
00:34:40.959 [2024-07-14 02:21:46.361255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.361288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.361513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.361549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.361758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.361801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.361982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.362012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.362199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.362235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8dfc000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.362377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370f20 is same with the state(5) to be set 00:34:40.959 [2024-07-14 02:21:46.362586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.362615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.362805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.362831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.363015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.363041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.363190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.363216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 
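From here the failing tqpair pointer is no longer constant (0x2362f20 and 0x7f8dfc000b90 above, 0x7f8e04000b90 just below, alongside the earlier 0x7f8df4000b90), and a single nvme_tcp_qpair_set_recv_state error is mixed in, suggesting the host is cycling through freshly allocated qpairs as it retries. When triaging a capture like this, one quick way to see how the attempts group per qpair (console output saved to a file; the file name is a placeholder):

  # Hypothetical triage one-liner: count reconnect failures per tqpair address.
  grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn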
00:34:40.959 [2024-07-14 02:21:46.363388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.363413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.363589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.363613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.363767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.363794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.363956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.363982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.364125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.364150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.364329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.364356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.364537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.364566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.364720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.364745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.364944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.364970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.365162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.365198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 
00:34:40.959 [2024-07-14 02:21:46.365414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.365443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.365652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.365678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.365835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.365862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.366033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.366059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.366208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.366234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.366409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.366436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.366628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.366654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.366862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.366896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.367076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.367103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.367284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.367310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 
00:34:40.959 [2024-07-14 02:21:46.367494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.367521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.367672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.367698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.367852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.367886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.368068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.368094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.368300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.368326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.369042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.369072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.369256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.369290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.369456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.369482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.369693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.369719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.369880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.369907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 
00:34:40.959 [2024-07-14 02:21:46.370070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.370096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.370278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.370303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.370458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.370484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.370673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.370699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.370850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.370883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.371033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.371059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.371225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.371251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.371426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.371452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.371628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.371654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.371862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.371899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 
00:34:40.959 [2024-07-14 02:21:46.372052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.372078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.372254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.372279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.372432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.372460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.372608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.372634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.372838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.372864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.373034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.373060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.373245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.373274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.373431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.373458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.373643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.373669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 00:34:40.959 [2024-07-14 02:21:46.373850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.959 [2024-07-14 02:21:46.373884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.959 qpair failed and we were unable to recover it. 
00:34:40.959 [2024-07-14 02:21:46.374079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.374104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.374323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.374348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.374561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.374586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.374758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.374784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.374935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.374961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.375138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.375163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.375349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.375374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.375567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.375593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.375747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.375772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.375948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.375974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 
00:34:40.960 [2024-07-14 02:21:46.376185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.376211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.376387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.376413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.376572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.376597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.376742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.376767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.376922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.376948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.377090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.377116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.377342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.377367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.377580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.377605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.377777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.377803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.377990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.378015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 
00:34:40.960 [2024-07-14 02:21:46.378167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.378194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.378401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.378428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.378609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.378634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.378819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.378844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.379037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.379063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.379243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.379268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.379460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.379486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.379637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.379664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.379858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.379890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.380043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.380069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 
00:34:40.960 [2024-07-14 02:21:46.380223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.380249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.380954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.380984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.381199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.381225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.381420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.381446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.381588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.381613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.381793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.381819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.381986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.382012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.382200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.382226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.382423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.382451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.382630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.382656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 
00:34:40.960 [2024-07-14 02:21:46.382833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.382858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.383071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.383097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.383262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.383287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.383477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.383502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.383650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.383677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.383858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.383896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.384070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.384096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.384263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.384289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.384497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.384523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.384682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.384707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 
00:34:40.960 [2024-07-14 02:21:46.384912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.384939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.385096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.385122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.385304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.385331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.385540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.385566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.385724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.385751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.385956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.385982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.386134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.386160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.386339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.386364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.386546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.386572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.386753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.386778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 
00:34:40.960 [2024-07-14 02:21:46.386925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.386951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.387129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.387154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.387342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.387368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.387546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.387574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.387737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.387761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.388024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.388051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.388274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.388300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.388514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.388540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.388762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.388787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.388985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.389011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 
00:34:40.960 [2024-07-14 02:21:46.389213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.389238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.389399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.389425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.389596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.389622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.389784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.389809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.390003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.390029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.390182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.390209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.390392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.390417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.390627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.390652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.390834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.390860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.391050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.391076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 
00:34:40.960 [2024-07-14 02:21:46.391227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.391253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.391444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.391470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.391676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.391701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.391837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.391877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.392063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.392089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.960 [2024-07-14 02:21:46.392696] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:40.960 [2024-07-14 02:21:46.392769] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.960 [2024-07-14 02:21:46.392798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.960 [2024-07-14 02:21:46.392834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.960 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.393091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.393118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.393312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.393337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.393539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.393565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 
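(Editor's note, not part of the original console output.) The initialization record interleaved above shows the DPDK EAL parameters the nvmf application was started with, including the core mask "-c 0xF0". In DPDK, -c takes a hexadecimal bitmask of CPU cores; a minimal, purely illustrative Python sketch to decode which cores such a mask selects:

    # Illustrative only: decode a DPDK-style hex core mask like the "-c 0xF0"
    # seen in the EAL parameter line above.
    mask = 0xF0
    cores = [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]
    print(cores)  # -> [4, 5, 6, 7], i.e. the app is pinned to cores 4-7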
00:34:40.961 [2024-07-14 02:21:46.393717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.393742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.393948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.393975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.394153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.394179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.394328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.394355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.394518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.394547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.394728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.394754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.394927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.394954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.395109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.395135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.395324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.395350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.396054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.396084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 
00:34:40.961 [2024-07-14 02:21:46.396333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.396361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.396522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.396549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.396762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.396789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.397025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.397052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.397198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.397224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.397420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.397446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.397621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.397646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.397824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.397850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.398006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.398033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.398212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.398247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 
00:34:40.961 [2024-07-14 02:21:46.398407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.398434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.398591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.398617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.398797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.398823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.398982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.399008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.399184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.399210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.399386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.399413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.399601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.399631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.399817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.399854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.400036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.400062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.400287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.400312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 
00:34:40.961 [2024-07-14 02:21:46.400468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.400493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.400646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.400671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.400851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.400895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.401092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.401118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.401325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.401351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.401547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.401573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.401766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.401791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.401954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.401981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.402158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.402192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.402401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.402427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 
00:34:40.961 [2024-07-14 02:21:46.402613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.402639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.402789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.402816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.402993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.403020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.403214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.403252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.403434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.403460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.403652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.403677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.403883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.403923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.404106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.404131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.404313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.404339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.404513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.404538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 
00:34:40.961 [2024-07-14 02:21:46.404711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.404736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.404927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.404954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.405135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.405160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.405339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.405364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.405548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.405573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.405762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.405787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.405942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.405968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.406145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.406178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.406390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.406415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.406569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.406595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 
00:34:40.961 [2024-07-14 02:21:46.406750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.406776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.406993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.407020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.407174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.407200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.407373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.407398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.407554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.407581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.407764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.407791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.407978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.408008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.408188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.408214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.408367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.408393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 00:34:40.961 [2024-07-14 02:21:46.408569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.961 [2024-07-14 02:21:46.408595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.961 qpair failed and we were unable to recover it. 
00:34:40.961 [2024-07-14 02:21:46.408756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:40.961 [2024-07-14 02:21:46.408782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 
00:34:40.961 qpair failed and we were unable to recover it. 
00:34:40.963 [2024-07-14 02:21:46.432510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:40.963 [2024-07-14 02:21:46.432539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 
00:34:40.963 EAL: No free 2048 kB hugepages reported on node 1 
00:34:40.963 qpair failed and we were unable to recover it. 
00:34:40.964 [2024-07-14 02:21:46.451264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:40.964 [2024-07-14 02:21:46.451289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 
00:34:40.964 qpair failed and we were unable to recover it. 
00:34:40.964 [2024-07-14 02:21:46.451464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.451490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.451668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.451693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.451870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.451895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.452050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.452076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.452225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.452253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.452435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.452461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.452639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.452665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.452840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.452870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.453058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.453083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.453257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.453282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 
00:34:40.964 [2024-07-14 02:21:46.453485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.453510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.453685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.453711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.453892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.453918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.454095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.454120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.454280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.454307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.454487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.454514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.454695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.454721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.454876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.454902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.455081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.455106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.455317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.455342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 
00:34:40.964 [2024-07-14 02:21:46.455498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.455523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.455704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.455729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.455891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.455917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.456063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.456088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.456261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.456287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.456435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.456461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.456674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.456703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.456900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.456927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.457076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.457101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.457285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.457311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 
00:34:40.964 [2024-07-14 02:21:46.457494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.457520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.457723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.457749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.457942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.457968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.458150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.458177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.458343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.458369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.458547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.458572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.458752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.458778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.458988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.459013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.459193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.459219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.459401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.459426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 
00:34:40.964 [2024-07-14 02:21:46.459606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.459631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.459835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.459860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.460058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.460084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.460288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.460313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.460493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.460518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.460695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.460720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.460898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.460924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.461073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.461099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.461276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.461302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.461475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.461501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 
00:34:40.964 [2024-07-14 02:21:46.461675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.461699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.461881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.461907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.462069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.462094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.462278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.462304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.462482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.462507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.462685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.462711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.462886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.462912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.463070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.463096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.463249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.463275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.463481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.463506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 
00:34:40.964 [2024-07-14 02:21:46.463683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.463708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.463856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.463887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.464071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.464096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.464245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.464271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.464448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.464474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.464646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.464671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.464854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.464889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.465039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.465064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.465211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.465238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 00:34:40.964 [2024-07-14 02:21:46.465415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.964 [2024-07-14 02:21:46.465441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.964 qpair failed and we were unable to recover it. 
00:34:40.965 [2024-07-14 02:21:46.465582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.465607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.465802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.465827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.465994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.466021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.466163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.466190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.466329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.466355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.466508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.466534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.466708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.466734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.466889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.466916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.467131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.467157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.467345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.467371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 
00:34:40.965 [2024-07-14 02:21:46.467542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.467568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.467746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.467771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.467924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.467950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 [2024-07-14 02:21:46.467955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.468122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.468147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.468302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.468328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.468506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.468531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.468709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.468734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.468892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.468919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.469126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.469152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 
00:34:40.965 [2024-07-14 02:21:46.469312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.469337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.469514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.469540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.469712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.469738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.469986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.470013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.470221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.470247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.470395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.470420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.470566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.470591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.470878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.470905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.471052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.471077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.471282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.471307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 
00:34:40.965 [2024-07-14 02:21:46.471454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.471479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.471655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.471681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.471966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.471992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.472201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.472226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.472367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.472393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.472537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.472563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.472720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.472746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.472930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.472957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.473130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.473156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.473364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.473390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 
00:34:40.965 [2024-07-14 02:21:46.473546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.473571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.473729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.473754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.473900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.473927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.474102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.474128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.474282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.474307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.474513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.474539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.474714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.474740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.474899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.474931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.475112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.475138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.475303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.475329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 
00:34:40.965 [2024-07-14 02:21:46.475506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.475538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.475729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.475755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.475943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.475969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.476144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.476169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.476334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.476360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.476547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.476573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.476759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.476784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.477008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.477034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.477237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.477262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.477570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.477595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 
00:34:40.965 [2024-07-14 02:21:46.477801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.477827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.478010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.478036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.478245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.478271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.478430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.478456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.478613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.478638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.478814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.478840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.479025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.479051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.479225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.479251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.479429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.479454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 00:34:40.965 [2024-07-14 02:21:46.479603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.965 [2024-07-14 02:21:46.479630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.965 qpair failed and we were unable to recover it. 
00:34:40.968 [2024-07-14 02:21:46.520894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.520920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.521070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.521095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.521367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.521393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.521638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.521663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.521843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.521977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.522171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.522198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.522373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.522399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.522553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.522583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.522789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.522815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.522967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.522993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 
00:34:40.968 [2024-07-14 02:21:46.523299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.523324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.523531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.523557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.523737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.523762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.523926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.523952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.524167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.524192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.524366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.524391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.524541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.524566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.524774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.524799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.524951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.524977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.525151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.525176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 
00:34:40.968 [2024-07-14 02:21:46.525355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.525381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.525535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.525562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.525736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.525761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.525938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.525964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.526104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.526129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.526330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.526355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.526502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.526527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.526679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.526705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.526897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.526923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.527133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.527159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 
00:34:40.968 [2024-07-14 02:21:46.527306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.527331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.527506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.527531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.527693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.527719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.527861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.527890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.528102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.528127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.528330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.528356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.528535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.528560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.528709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.528734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.528914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.528941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.529122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.529148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 
00:34:40.968 [2024-07-14 02:21:46.529335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.529361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.529562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.529587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.529740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.529765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.529910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.529936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.530147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.530173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.530374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.530400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.530605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.530630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.530840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.530876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.531047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.531073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.531243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.531268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 
00:34:40.968 [2024-07-14 02:21:46.531448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.531474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.531732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.531757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.531947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.531973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.532137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.532162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.532338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.532363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.532516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.532542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.532799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.532824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.533008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.533033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.533187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.533212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.533395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.533421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 
00:34:40.968 [2024-07-14 02:21:46.533593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.533618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.533825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.533850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.534040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.534066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.534248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.534274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.534415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.534440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.534620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.534646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.534801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.534827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.535008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.535033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.535216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.535242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.535393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.535419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 
00:34:40.968 [2024-07-14 02:21:46.535579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.535604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.535791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.535819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.535972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.535998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.536184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.536209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.536418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.536444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.536619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.536644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.536825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.536850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.537006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.537032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.537205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.537230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.537409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.537434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 
00:34:40.968 [2024-07-14 02:21:46.537640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.537665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.537838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.537863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.538079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.538104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.538276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.538301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.538554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.538580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.538784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.538809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.968 [2024-07-14 02:21:46.538992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.968 [2024-07-14 02:21:46.539019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.968 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.539191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.539221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.539389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.539415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.539575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.539603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 
00:34:40.969 [2024-07-14 02:21:46.539785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.539810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.539989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.540016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.540195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.540221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.540476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.540502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.540676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.540701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.540884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.540910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.541060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.541087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.541267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.541293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.541499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.541524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.541682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.541707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 
00:34:40.969 [2024-07-14 02:21:46.541879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.541904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.542092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.542117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.542302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.542327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.542498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.542524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.542703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.542728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.542885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.542912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.543069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.543096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.543241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.543267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.543440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.543466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.543643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.543669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 
00:34:40.969 [2024-07-14 02:21:46.543848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.543880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.544057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.544083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.544257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.544283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.544464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.544489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.544697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.544722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.544894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.544919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.545111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.545137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.545305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.545330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.545511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.545536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.545717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.545743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 
00:34:40.969 [2024-07-14 02:21:46.545916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.545941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.546089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.546114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.546280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.546306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.546484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.546510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.546666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.546691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.546897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.546923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.547123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.547148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.547295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.547324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.547516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.547542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.547693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.547719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 
00:34:40.969 [2024-07-14 02:21:46.547861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.547892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.548072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.548098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.548271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.548296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.548499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.548524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.548701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.548728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.548874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.548900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.549048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.549074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.549250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.549275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.549450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.549475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.549619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.549646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 
00:34:40.969 [2024-07-14 02:21:46.549799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.549825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.550007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.550033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.550213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.550238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.550417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.550443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.550616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.550642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.550845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.550876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.551086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.551111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.551264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.551288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.551490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.551516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.551665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.551692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 
00:34:40.969 [2024-07-14 02:21:46.551873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.551898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.552076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.552101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.552280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.552306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.552482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.552506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.552690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.552715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.552900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.552927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.553110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.553135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.553309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.553334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.553510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.553536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.553710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.553735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 
00:34:40.969 [2024-07-14 02:21:46.553913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.553939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.554088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.554113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.554259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.554285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.554468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.554494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.554649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.554675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.554833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.554859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.555047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.555073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.555229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.555259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.555514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.555540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.555688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.555714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 
00:34:40.969 [2024-07-14 02:21:46.555858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.555888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.556041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.556066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.556269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.556293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.556474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.969 [2024-07-14 02:21:46.556499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.969 qpair failed and we were unable to recover it. 00:34:40.969 [2024-07-14 02:21:46.556751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.556777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.556932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.556958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.557136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.557162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.557342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.557368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.557525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.557551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.557730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.557755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 
00:34:40.970 [2024-07-14 02:21:46.557902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.557929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.558108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.558133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.558313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.558338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.558508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.558534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.558704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.558730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.558880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.558906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.559054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.559079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.559227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.559252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.559430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.559455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.559656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.559681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 
00:34:40.970 [2024-07-14 02:21:46.559854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.559886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.560070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.560095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.560251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.560276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.560479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.560504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.560662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.560688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.560885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.560911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.561059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.561085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.561295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.561321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.561469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.561494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.561679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.561705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 
00:34:40.970 [2024-07-14 02:21:46.561884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.561910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.562110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.562136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.562283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.562310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.562520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.562546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.562725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.562750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.562905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.562930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.563076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.563102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.563244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.563273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.563526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.563551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.563707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.563733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 
00:34:40.970 [2024-07-14 02:21:46.563896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.563922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.564078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.564103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.564360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.564386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.564539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.564565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.564703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.564728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.564852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.970 [2024-07-14 02:21:46.564895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.564904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.970 [2024-07-14 02:21:46.564920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.564923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.970 [2024-07-14 02:21:46.564937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.970 [2024-07-14 02:21:46.564948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.970 [2024-07-14 02:21:46.565006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:40.970 [2024-07-14 02:21:46.565064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:40.970 [2024-07-14 02:21:46.565098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.565122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 
00:34:40.970 [2024-07-14 02:21:46.565036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:40.970 [2024-07-14 02:21:46.565067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:40.970 [2024-07-14 02:21:46.565296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.565320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.565580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.565606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.565780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.565805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.565957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.565983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.566248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.566274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.566442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.566467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.566610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.566635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.566779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.566806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.567089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.567116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 
00:34:40.970 [2024-07-14 02:21:46.567307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.567333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.567488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.567514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.567666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.567692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.567861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.567892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.568062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.568088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.568395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.568421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.568568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.568594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.568744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.568771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.568949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.568976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.569127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.569152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 
00:34:40.970 [2024-07-14 02:21:46.569300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.569326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.569498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.569524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.569774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.569799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.569979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.570005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.570158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.570184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.570336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.570362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.570536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.570562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.570737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.570763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.570933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.570963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.571147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.571175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 
00:34:40.970 [2024-07-14 02:21:46.571368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.571394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.571556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.571582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.571728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.571753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.571932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.571958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.572163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.572188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.572336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.572361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.572503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.572528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.572690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.572715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.572950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.572976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.970 [2024-07-14 02:21:46.573135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.573161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 
00:34:40.970 [2024-07-14 02:21:46.573338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.970 [2024-07-14 02:21:46.573364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.970 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.573581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.573607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.573772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.573798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.573975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.574002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.574184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.574209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.574353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.574378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.574556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.574583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.574860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.574890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.575053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.575079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.575229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.575254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 
00:34:40.971 [2024-07-14 02:21:46.575392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.575417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.575565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.575591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.575737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.575762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.575937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.575963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.576149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.576175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.576356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.576382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.576532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.576557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.576698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.576724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.576900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.576926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.577092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.577118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 
00:34:40.971 [2024-07-14 02:21:46.577295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.577320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.577582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.577607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.577755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.577780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.577935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.577961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.578136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.578162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.578332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.578357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.578613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.578639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.578825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.578850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.579025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.579055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.579332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.579357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 
00:34:40.971 [2024-07-14 02:21:46.579512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.579540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.579759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.579784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.579939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.579965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.580123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.580149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.580322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.580347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.580498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.580524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.580663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.580688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.580913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.580939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.581114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.581140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.581284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.581309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 
00:34:40.971 [2024-07-14 02:21:46.581565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.581590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.581749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.581775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.582022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.582047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.582190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.582216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.582370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.582395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.582544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.582571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.582763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.582789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.582962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.582988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.583138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.583164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 00:34:40.971 [2024-07-14 02:21:46.583345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.971 [2024-07-14 02:21:46.583371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.971 qpair failed and we were unable to recover it. 
[... the identical three-line error sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously, with timestamps from 2024-07-14 02:21:46.583628 through 02:21:46.623695 ...]
00:34:40.973 [2024-07-14 02:21:46.623836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.623862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.624025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.624050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.624223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.624248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.624403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.624428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.624600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.624625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.624775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.624801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.624978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.625004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.625180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.625205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.625367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.625392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.625592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.625617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 
00:34:40.973 [2024-07-14 02:21:46.625757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.625783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.625932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.625958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.626107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.626132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:40.973 [2024-07-14 02:21:46.626278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.973 [2024-07-14 02:21:46.626303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:40.973 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.626446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.626473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.626628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.626654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.626846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.626879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.627026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.627052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.627238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.627267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.627536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.627561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 
00:34:41.247 [2024-07-14 02:21:46.627761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.627787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.627963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.627989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.628141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.628167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.628467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.628492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.628674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.628700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.628840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.628871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.629021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.629046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.629203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.629230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.629407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.629432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.629616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.629641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 
00:34:41.247 [2024-07-14 02:21:46.629837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.629863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.630071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.630098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.630256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.247 [2024-07-14 02:21:46.630282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.247 qpair failed and we were unable to recover it. 00:34:41.247 [2024-07-14 02:21:46.630425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.630450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.630601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.630628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.630796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.630821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.631089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.631115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.631263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.631288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.631465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.631491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.631647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.631672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 
00:34:41.248 [2024-07-14 02:21:46.631841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.631873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.632067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.632092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.632236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.632262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.632433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.632458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.632601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.632626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.632785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.632810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.632960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.632986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.633188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.633213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.633361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.633386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.633549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.633574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 
00:34:41.248 [2024-07-14 02:21:46.633737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.633762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.634000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.634026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.634218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.634243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.634396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.634421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.634622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.634647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.634802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.634827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.635021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.635046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.635209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.635234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.635373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.635402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.635553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.635578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 
00:34:41.248 [2024-07-14 02:21:46.635722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.635747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.635926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.635952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.636126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.636151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.636312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.636338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.636512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.636538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.636714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.636739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.636918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.636944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.637092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.637118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.637270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.637294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.637460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.637486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 
00:34:41.248 [2024-07-14 02:21:46.637771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.637796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.637973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.637998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.638281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.248 [2024-07-14 02:21:46.638307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.248 qpair failed and we were unable to recover it. 00:34:41.248 [2024-07-14 02:21:46.638481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.638507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.638650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.638675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.638850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.638887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.639058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.639083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.639267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.639292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.639472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.639497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.639657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.639683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 
00:34:41.249 [2024-07-14 02:21:46.639856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.639887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.640066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.640092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.640253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.640278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.640456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.640481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.640620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.640644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.640803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.640831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.640983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.641010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.641189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.641213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.641363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.641387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.641538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.641563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 
00:34:41.249 [2024-07-14 02:21:46.641709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.641733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.641894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.641920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.642090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.642115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.642292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.642317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.642467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.642492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.642693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.642718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.642874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.642900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.643047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.643072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.643274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.643303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.643459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.643483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 
00:34:41.249 [2024-07-14 02:21:46.643634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.643658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.643837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.643862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.644017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.644043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.644199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.644223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.644374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.644399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.644572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.644597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.644738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.644763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.644941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.644966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.645145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.645170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.645317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.645342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 
00:34:41.249 [2024-07-14 02:21:46.645485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.645510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.645685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.645711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.645872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.645898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.646090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.249 [2024-07-14 02:21:46.646115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.249 qpair failed and we were unable to recover it. 00:34:41.249 [2024-07-14 02:21:46.646261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.646287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.646473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.646498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.646678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.646703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.646848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.646892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.647053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.647078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.647241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.647265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 
00:34:41.250 [2024-07-14 02:21:46.647439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.647464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.647611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.647635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.647836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.647860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.648030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.648054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.648211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.648238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.648421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.648446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.648592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.648616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.648821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.648845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.648998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.649024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.649176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.649202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 
00:34:41.250 [2024-07-14 02:21:46.649383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.649408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.649575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.649600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.649779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.649804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.649981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.650006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.650169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.650193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.650336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.650361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.650527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.650552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.650692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.650717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.650895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.650921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.651068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.651093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 
00:34:41.250 [2024-07-14 02:21:46.651269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.651293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.651456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.651480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.651638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.651663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.651848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.651887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.652074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.652101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.652264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.652289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.652468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.652493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.652665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.652690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.652850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.652883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.653065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.653090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 
00:34:41.250 [2024-07-14 02:21:46.653246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.653271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.653445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.653470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.653651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.653677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.653825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.653849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.250 [2024-07-14 02:21:46.654028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.250 [2024-07-14 02:21:46.654053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.250 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.654223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.654248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.654404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.654430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.654597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.654622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.654769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.654793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.654965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.654991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 
00:34:41.251 [2024-07-14 02:21:46.655145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.655171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.655350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.655375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.655527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.655552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.655698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.655723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.655893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.655918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.656061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.656089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.656270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.656296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.656470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.656495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.656647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.656671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.656849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.656879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 
00:34:41.251 [2024-07-14 02:21:46.657031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.657055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.657235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.657260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.657424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.657448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.657595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.657620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.657794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.657819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.658009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.658035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.658212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.658236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.658387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.658412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.658566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.658591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.658741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.658766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 
00:34:41.251 [2024-07-14 02:21:46.658940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.658965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.659110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.659134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.659309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.659334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.659489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.659514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.659705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.659730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.659876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.659901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.660060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.660085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.251 [2024-07-14 02:21:46.660267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.251 [2024-07-14 02:21:46.660292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.251 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.660473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.660498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.660674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.660717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 
00:34:41.252 [2024-07-14 02:21:46.660893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.660922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.661101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.661128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.661288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.661315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.661495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.661521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.661695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.661721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.661880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.661907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.662061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.662088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.662238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.662264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.662413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.662440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.662653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.662679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 
00:34:41.252 [2024-07-14 02:21:46.662827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.662854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.663024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.663050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.663235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.663261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.663407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.663433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.663598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.663624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.663806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.663837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.664011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.664038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.664240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.664266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.664444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.664471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.664628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.664654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 
00:34:41.252 [2024-07-14 02:21:46.664822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.664849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.665027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.665052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.665204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.665229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.665398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.665422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.665609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.665634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.665781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.665806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.665956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.665984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.666160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.666186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.666384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.666410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.666620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.666646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 
00:34:41.252 [2024-07-14 02:21:46.666796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.666822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.666994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.667020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.667183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.667209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.667401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.667427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.252 qpair failed and we were unable to recover it. 00:34:41.252 [2024-07-14 02:21:46.667573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.252 [2024-07-14 02:21:46.667599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.667774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.667800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.667982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.668010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.668214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.668240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.668391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.668417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.668601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.668628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 
00:34:41.253 [2024-07-14 02:21:46.668803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.668829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.668987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.669014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.669189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.669214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.669364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.669388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.669532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.669557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.669747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.669773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.669948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.669973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.670153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.670178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.670323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.670347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.670495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.670520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 
00:34:41.253 [2024-07-14 02:21:46.670689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.670714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.670878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.670904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.671092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.671116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.671269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.671295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.671445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.671470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.671640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.671668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.671843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.671883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.672077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.672101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.672250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.672276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.672452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.672477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 
00:34:41.253 [2024-07-14 02:21:46.672624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.672649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.672830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.672855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.253 [2024-07-14 02:21:46.673035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.253 [2024-07-14 02:21:46.673060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.253 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.673205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.673230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.673407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.673432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.673576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.673600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.673781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.673805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.673956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.673982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.674124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.674148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.674331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.674355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 
00:34:41.254 [2024-07-14 02:21:46.674535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.674561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.674717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.674742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.674912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.674938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.675105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.675130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.675278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.675303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.675473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.675498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.675670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.675694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.675893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.675918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.676081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.676106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.676249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.676274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 
00:34:41.254 [2024-07-14 02:21:46.676439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.676463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.676636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.676661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.676874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.676900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.677049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.677074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.677240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.677264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.677413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.677438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.677590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.677616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.677797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.677822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.678030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.678055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.678199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.678225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 
00:34:41.254 [2024-07-14 02:21:46.678405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.678430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.678613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.678638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.678795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.678819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.678972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.678998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.679187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.679212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.679387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.679416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.679563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.679588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.679764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.679789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.679967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.679993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.680170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.680194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 
00:34:41.254 [2024-07-14 02:21:46.680367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.680392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.680529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.680553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.680740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.254 [2024-07-14 02:21:46.680766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.254 qpair failed and we were unable to recover it. 00:34:41.254 [2024-07-14 02:21:46.680956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.680982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.681127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.681152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.681336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.681362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.681512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.681536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.681711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.681735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.681893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.681919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.682177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.682202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 
00:34:41.255 [2024-07-14 02:21:46.682354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.682380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.682582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.682607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.682783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.682807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.682974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.683000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.683159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.683184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.683353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.683377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.683557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.683581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.683726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.683751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.683928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.683954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.684126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.684150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 
00:34:41.255 [2024-07-14 02:21:46.684321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.684345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.684520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.684545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.684724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.684749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.684945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.684971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.685142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.685167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.685375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.685399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.685556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.685580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.685743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.685768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.685927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.685953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.686100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.686125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 
00:34:41.255 [2024-07-14 02:21:46.686300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.686325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.686502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.686527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.686678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.686702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.686893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.686919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.687096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.687121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.687292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.687321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.687495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.687520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.687687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.687711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.687889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.687915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.688059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.688084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 
00:34:41.255 [2024-07-14 02:21:46.688257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.688282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.688428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.688452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.688599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.688624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.688781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.255 [2024-07-14 02:21:46.688821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.255 qpair failed and we were unable to recover it. 00:34:41.255 [2024-07-14 02:21:46.689023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.689052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.689256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.689283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.689428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.689454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.689649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.689674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.689821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.689848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.690043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.690069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 
00:34:41.256 [2024-07-14 02:21:46.690226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.690251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.690429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.690455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.690630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.690654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.690809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.690835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.691044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.691070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.691235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.691260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.691437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.691461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.691610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.691634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.691810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.691835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.692023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.692049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 
00:34:41.256 [2024-07-14 02:21:46.692193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.692219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.692421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.692446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.692596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.692621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.692787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.692813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.692967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.692992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.693143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.693168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.693352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.693377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.693548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.693572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.693760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.693784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.693957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.693982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 
00:34:41.256 [2024-07-14 02:21:46.694126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.694151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.694327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.694352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.694524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.694548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.694718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.694742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.694931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.694956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.695130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.695159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.695308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.695332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.695480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.695505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.695700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.695725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.695885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.695911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 
00:34:41.256 [2024-07-14 02:21:46.696086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.696110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.696253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.696277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.696439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.696464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.696641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.696666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.256 [2024-07-14 02:21:46.696840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.256 [2024-07-14 02:21:46.696864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.256 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.697022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.697046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.697237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.697261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.697432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.697457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.697616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.697642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.697877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.697902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 
00:34:41.257 [2024-07-14 02:21:46.698047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.698071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.698237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.698262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.698423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.698448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.698598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.698626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.698769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.698795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.698958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.698985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.699166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.699190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.699357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.699383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.699553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.699578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.699729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.699754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 
00:34:41.257 [2024-07-14 02:21:46.699906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.699932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.700106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.700130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.700273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.700298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.700445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.700469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.700639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.700663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.700811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.700835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.701012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.701038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.701199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.701224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.701391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.701416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.701556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.701581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 
00:34:41.257 [2024-07-14 02:21:46.701732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.701758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.701936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.701961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.702112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.702137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.702288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.702313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.702474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.702499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.702635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.702664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.702843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.702874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.257 qpair failed and we were unable to recover it. 00:34:41.257 [2024-07-14 02:21:46.703049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.257 [2024-07-14 02:21:46.703075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.703224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.703248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.703392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.703416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 
00:34:41.258 [2024-07-14 02:21:46.703573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.703598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.703773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.703797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.703948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.703973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.704156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.704181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.704364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.704388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.704550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.704574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.704719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.704743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.704891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.704916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.705100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.705126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.705277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.705302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 
00:34:41.258 [2024-07-14 02:21:46.705450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.705474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.705651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.705675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.705820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:41.258 [2024-07-14 02:21:46.705845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.706008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.706032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.706183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:41.258 [2024-07-14 02:21:46.706211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.706385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.706411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:41.258 [2024-07-14 02:21:46.706550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.706584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.706780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.706806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 
00:34:41.258 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:41.258 [2024-07-14 02:21:46.706953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.706981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:41.258 [2024-07-14 02:21:46.707130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.707160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.707347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.707371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.707515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.707540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.707694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.707719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.707895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.707919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.708091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.708115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.708295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.708321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.708471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.708495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 
00:34:41.258 [2024-07-14 02:21:46.708638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.708663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.708829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.708854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.709018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.709043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.709214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.709238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.709403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.709429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.709612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.709637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.709809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.709838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.709995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.710020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.710178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.258 [2024-07-14 02:21:46.710204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.258 qpair failed and we were unable to recover it. 00:34:41.258 [2024-07-14 02:21:46.710367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.710392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 
00:34:41.259 [2024-07-14 02:21:46.710537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.710563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.710758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.710783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.710961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.710988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.711135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.711160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.711333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.711358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.711522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.711548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.711736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.711761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.711935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.711960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.712123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.712148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.712333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.712358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 
00:34:41.259 [2024-07-14 02:21:46.712513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.712539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.712746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.712772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.712954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.712980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.713151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.713177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.713323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.713348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.713513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.713539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.713722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.713748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.713914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.713940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.714090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.714114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.714293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.714318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 
00:34:41.259 [2024-07-14 02:21:46.714491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.714516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.714698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.714723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.714907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.714932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e04000b90 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.715221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.715263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.715464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.715491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.715673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.715698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.715844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.715882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.716050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.716076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.716275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.716300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.716500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.716524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 
00:34:41.259 [2024-07-14 02:21:46.716699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.716725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.716878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.716905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.717073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.717098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.717352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.717378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.717581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.717606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.717761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.717787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.717981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.718006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.718164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.718189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.259 [2024-07-14 02:21:46.718333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.259 [2024-07-14 02:21:46.718358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.259 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.718533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.718558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 
00:34:41.260 [2024-07-14 02:21:46.718740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.718765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.718912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.718938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.719079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.719103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.719278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.719303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.719470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.719495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.719663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.719688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.719863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.719894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.720033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.720059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.720235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.720260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.720527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.720552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 
00:34:41.260 [2024-07-14 02:21:46.720694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.720723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.720888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.720914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.721077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.721102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.721356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.721381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.721560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.721585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.721726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.721751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.721947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.721972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.722130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.722162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.722339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.722364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.722535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.722559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 
00:34:41.260 [2024-07-14 02:21:46.722746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.722771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.722953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.722979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.723129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.723153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.723310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.723336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.723521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.723546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.723688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.723712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.723854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.723884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.724045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.724070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.724258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.724283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.724453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.724479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 
00:34:41.260 [2024-07-14 02:21:46.724626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.724651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.724792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.724817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.724966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.724992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.725162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.725186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.725346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.725371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.725541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.725565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.725725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.725750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.725927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.725963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.260 qpair failed and we were unable to recover it. 00:34:41.260 [2024-07-14 02:21:46.726105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.260 [2024-07-14 02:21:46.726130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.726312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.726337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 
00:34:41.261 [2024-07-14 02:21:46.726495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.726520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.726675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.726699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.726850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.726882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.727034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.727060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.727242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.727266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.727411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.727436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.727592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.727617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.727791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.727816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.727961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.727986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.728162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.728187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 
00:34:41.261 [2024-07-14 02:21:46.728346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.728371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.728579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.728604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.728748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.728776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.728930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.728955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.729099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.729124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.729276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.729301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.729452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.729477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.729636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.729661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.729860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.729891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.730053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.730078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 
00:34:41.261 [2024-07-14 02:21:46.730269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.730294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.730435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.730460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.730615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.730640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.730837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.730862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.731018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.731047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.731187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.731212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.731379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.731403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.731590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.731615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.731778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.731803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 
00:34:41.261 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.261 [2024-07-14 02:21:46.731989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.732015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:41.261 [2024-07-14 02:21:46.732193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.732219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.732363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.732389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.261 [2024-07-14 02:21:46.732538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.732563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:41.261 [2024-07-14 02:21:46.732740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.732767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.732972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.732998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.733139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.261 [2024-07-14 02:21:46.733164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.261 qpair failed and we were unable to recover it. 00:34:41.261 [2024-07-14 02:21:46.733309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.733338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 
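The shell trace interleaved above shows the nvmf_target_disconnect_tc2 harness installing its cleanup trap (process_shm plus nvmftestfini on SIGINT, SIGTERM, or EXIT) and creating the RAM-backed bdev the target will expose, via rpc_cmd bdev_malloc_create 64 512 -b Malloc0, that is, a 64 MB malloc bdev with a 512-byte block size named Malloc0. As a rough illustration only, assuming SPDK's stock scripts/rpc.py client rather than the harness's rpc_cmd wrapper, the equivalent direct calls would look like this:

    # Sketch (assumption: run from an SPDK checkout with the target app already up,
    # so ./scripts/rpc.py can reach it): create the same 64 MB, 512-byte-block malloc
    # bdev named Malloc0 that the test requests, then list bdevs to confirm it exists.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_get_bdevs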
00:34:41.262 [2024-07-14 02:21:46.733488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.733514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.733682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.733707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.733878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.733904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.734075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.734099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.734268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.734293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.734432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.734457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.734615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.734640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.734813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.734838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.735019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.735044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.735191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.735217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 
00:34:41.262 [2024-07-14 02:21:46.735365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.735390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.735526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.735550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.735702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.735726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.735910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.735942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.736087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.736111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.736249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.736274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.736420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.736444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.736587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.736611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.736770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.736795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.736969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.736995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 
00:34:41.262 [2024-07-14 02:21:46.737149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.737174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.737311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.737336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.737482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.737506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.737687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.737712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.737864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.737900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.738063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.738087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.738233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.738261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.738409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.738435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.738598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.738623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.738878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.738904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 
00:34:41.262 [2024-07-14 02:21:46.739104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.739129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.739312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.739337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.739484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.739509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.739660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.739686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.739861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.262 [2024-07-14 02:21:46.739891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.262 qpair failed and we were unable to recover it. 00:34:41.262 [2024-07-14 02:21:46.740078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.740104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.740310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.740335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.740478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.740503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.740656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.740682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.740862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.740894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 
00:34:41.263 [2024-07-14 02:21:46.741054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.741079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.741243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.741268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.741443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.741468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.741616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.741642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.741785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.741810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.741971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.741996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.742176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.742201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.742343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.742368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.742528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.742553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.742726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.742751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 
00:34:41.263 [2024-07-14 02:21:46.742908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.742941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.743094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.743119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.743265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.743289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.743462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.743491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.743669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.743694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.743859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.743888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.744082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.744107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.744283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.744308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.744459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.744483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.744627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.744652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 
00:34:41.263 [2024-07-14 02:21:46.744812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.744837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.744992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.745017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.745190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.745214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.745391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.745415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.745563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.745587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.745738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.745764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.746055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.746080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.746256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.746281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.746428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.746453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.746591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.746615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 
00:34:41.263 [2024-07-14 02:21:46.746767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.746792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.746937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.746962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.747120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.747144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.747298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.747323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.747515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.747539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.747692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.263 [2024-07-14 02:21:46.747716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.263 qpair failed and we were unable to recover it. 00:34:41.263 [2024-07-14 02:21:46.747901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.747935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.748088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.748112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.748268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.748293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.748444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.748468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 
00:34:41.264 [2024-07-14 02:21:46.748671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.748696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.748871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.748896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.749049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.749074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.749223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.749247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.749390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.749414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.749574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.749599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.749742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.749766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.749947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.749973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.750128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.750153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.750295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.750320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 
00:34:41.264 [2024-07-14 02:21:46.750467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.750492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.750632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.750657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.750831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.750855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.751020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.751044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.751228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.751252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.751401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.751426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.751580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.751604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.751753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.751778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.751971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.751997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.752195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.752220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 
00:34:41.264 [2024-07-14 02:21:46.752396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.752421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.752704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.752728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.752924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.752950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.753131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.753156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.753423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.753448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.753595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.753620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.753825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.753849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.754062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.754086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.754239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.754264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.754413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.754438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 
00:34:41.264 [2024-07-14 02:21:46.754610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.754635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.754791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.754815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.755010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.755040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.755182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.755207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.755349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.755374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.755550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.755575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.755741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.755766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.264 qpair failed and we were unable to recover it. 00:34:41.264 [2024-07-14 02:21:46.755943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.264 [2024-07-14 02:21:46.755969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.756120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.756145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.756332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.756356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 
00:34:41.265 [2024-07-14 02:21:46.756497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.756522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.756665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.756694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.756892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 Malloc0 00:34:41.265 [2024-07-14 02:21:46.756926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.757105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.757130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.265 [2024-07-14 02:21:46.757299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.757327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:41.265 [2024-07-14 02:21:46.757471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.757497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.265 [2024-07-14 02:21:46.757665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.757690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:41.265 [2024-07-14 02:21:46.757855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.757885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 
00:34:41.265 [2024-07-14 02:21:46.758040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.758065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.758254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.758279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.758421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.758445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.758584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.758609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.758751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.758775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.758987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.759022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.759188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.759212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.759357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.759382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.759555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.759579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.759725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.759750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 
00:34:41.265 [2024-07-14 02:21:46.759894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.759929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.760104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.760128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.760273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.760298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.760438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.760463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.760577] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.265 [2024-07-14 02:21:46.760604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.760628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.760776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.760801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.760956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.760984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.761138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.761163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.761338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.761367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.761539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.761564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 
00:34:41.265 [2024-07-14 02:21:46.761736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.761760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.761911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.761937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.762087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.762112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.762290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.762315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.762461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.762485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.762684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.762709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.762858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.265 [2024-07-14 02:21:46.762907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.265 qpair failed and we were unable to recover it. 00:34:41.265 [2024-07-14 02:21:46.763052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.763076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.763249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.763274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.763446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.763470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 
00:34:41.266 [2024-07-14 02:21:46.763616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.763640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.763815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.763840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.764072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.764115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.764308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.764336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.764494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.764520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.764668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.764694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.764877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.764904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.765079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.765104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8df4000b90 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.765294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.765321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.765472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.765497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 
00:34:41.266 [2024-07-14 02:21:46.765681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.765706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.765878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.765904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.766074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.766099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.766248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.766273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.766425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.766450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.766623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.766652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.766799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.766824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.767008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.767033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.767209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.767234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.767426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.767451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 
00:34:41.266 [2024-07-14 02:21:46.767626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.767651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.767800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.767826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.768014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.768040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.768218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.768243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.768409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.768433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.768625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.768649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 [2024-07-14 02:21:46.768802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.768827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.266 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:41.266 [2024-07-14 02:21:46.769036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.266 [2024-07-14 02:21:46.769062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.266 qpair failed and we were unable to recover it. 00:34:41.266 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.266 [2024-07-14 02:21:46.769218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.769243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 
00:34:41.267 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:41.267 [2024-07-14 02:21:46.769408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.769433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.769583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.769608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.769743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.769768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.769941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.769966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.770142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.770167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.770336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.770361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.770510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.770534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.770690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.770715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.770857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.770887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.771029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.771054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 
00:34:41.267 [2024-07-14 02:21:46.771203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.771228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.771371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.771395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.771574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.771598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.771752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.771777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.771933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.771958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.772102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.772126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.772304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.772329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.772503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.772527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.772726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.772750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.772929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.772955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 
00:34:41.267 [2024-07-14 02:21:46.773126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.773151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.773320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.773345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.773510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.773534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.773705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.773730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.773885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.773910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.774076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.774105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.774282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.774307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.774448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.774472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.774646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.774670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.774839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.774864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 
00:34:41.267 [2024-07-14 02:21:46.775045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.775070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.775237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.775261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.775430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.775455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.775601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.775626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.775792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.775817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.775964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.775989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.776152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.776177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.776351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.776375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.776546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.776570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 00:34:41.267 [2024-07-14 02:21:46.776723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.267 [2024-07-14 02:21:46.776748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.267 qpair failed and we were unable to recover it. 
00:34:41.267 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.268 [2024-07-14 02:21:46.776926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.776952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:41.268 [2024-07-14 02:21:46.777098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.268 [2024-07-14 02:21:46.777124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:41.268 [2024-07-14 02:21:46.777302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.777327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.777493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.777518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.777679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.777704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.777850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.777880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.778060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.778084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.778240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.778266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 
00:34:41.268 [2024-07-14 02:21:46.778407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.778431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.778596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.778621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.778794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.778819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.778971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.778997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.779161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.779186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.779340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.779364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.779504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.779529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.779700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.779724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.779879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.779904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.780049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.780073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 
00:34:41.268 [2024-07-14 02:21:46.780245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.780270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.780418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.780444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.780582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.780607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.780750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.780775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.780933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.780958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.781130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.781155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.781306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.781335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.781480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.781505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.781684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.781708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.781875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.781900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 
00:34:41.268 [2024-07-14 02:21:46.782082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.782106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.782249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.782274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.782440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.782465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.782631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.782656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.782820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.782845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.783026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.783051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.783256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.783281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.783457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.783482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.783643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.268 [2024-07-14 02:21:46.783668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.268 qpair failed and we were unable to recover it. 00:34:41.268 [2024-07-14 02:21:46.783847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.783878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 
00:34:41.269 [2024-07-14 02:21:46.784027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.784052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.784191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.784216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.784360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.784385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.784533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.784557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.784709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.784733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.269 [2024-07-14 02:21:46.784899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.784925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.269 [2024-07-14 02:21:46.785071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.785096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.269 [2024-07-14 02:21:46.785233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:41.269 [2024-07-14 02:21:46.785257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 
00:34:41.269 [2024-07-14 02:21:46.785405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.785430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.785585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.785609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.785760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.785785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.785935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.785961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.786111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.786136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.786314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.786339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.786485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.786510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.786654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.786679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.786817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.786841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.787021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.787046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 
00:34:41.269 [2024-07-14 02:21:46.787189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.787214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.787365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.787390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.787560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.787585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.787749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.787774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.787925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.787951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.788091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.788116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.788285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.788309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.788489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.788514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 [2024-07-14 02:21:46.788695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.269 [2024-07-14 02:21:46.788720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362f20 with addr=10.0.0.2, port=4420 00:34:41.269 qpair failed and we were unable to recover it. 
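The errno 111 in the burst of failures above is ECONNREFUSED: the host-side posix_sock_create() cannot even open a TCP socket to 10.0.0.2:4420, because no NVMe/TCP listener is bound to that address yet (the listener is only added a few lines further down). A minimal sketch of the same condition, outside the test run and assuming only a bash shell with /dev/tcp support:

    # Hypothetical check, not part of the autotest: a plain TCP connect to the target
    # address is refused (errno 111) until the NVMe/TCP listener has been created.
    (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null \
        && echo "TCP connect to 10.0.0.2:4420 succeeded, listener is up" \
        || echo "TCP connect refused, no NVMe/TCP listener on 10.0.0.2:4420 yet"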
00:34:41.269 [2024-07-14 02:21:46.788800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.269 [2024-07-14 02:21:46.791539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.269 [2024-07-14 02:21:46.791738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.269 [2024-07-14 02:21:46.791766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.269 [2024-07-14 02:21:46.791782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.269 [2024-07-14 02:21:46.791795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.269 [2024-07-14 02:21:46.791830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.269 02:21:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1746111 00:34:41.269 [2024-07-14 02:21:46.801307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.269 [2024-07-14 02:21:46.801462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.269 [2024-07-14 02:21:46.801488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.269 [2024-07-14 02:21:46.801502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.269 [2024-07-14 02:21:46.801515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.269 [2024-07-14 02:21:46.801544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.269 qpair failed and we were unable to recover it. 
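The rpc_cmd lines interleaved with the errors above are the autotest wrapper around SPDK's scripts/rpc.py; the two listener additions traced here (host/target_disconnect.sh lines 25 and 26) correspond to the following standalone invocations, sketched under the assumption that spdk_tgt is already running with its RPC socket available:

    # Add the NVMe/TCP listener for the test subsystem, then for the discovery subsystem,
    # using the same transport/address/service-id arguments recorded in the trace.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the first call completes, the target logs the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen above and the plain connect() failures stop; the remaining errors are rejections of the NVMe-oF CONNECT command itself.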
00:34:41.269 [2024-07-14 02:21:46.811356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.269 [2024-07-14 02:21:46.811508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.269 [2024-07-14 02:21:46.811534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.269 [2024-07-14 02:21:46.811549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.269 [2024-07-14 02:21:46.811562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.269 [2024-07-14 02:21:46.811590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.269 qpair failed and we were unable to recover it. 00:34:41.270 [2024-07-14 02:21:46.821331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.821490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.821516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.821531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.821544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.821572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 00:34:41.270 [2024-07-14 02:21:46.831296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.831450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.831475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.831490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.831502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.831530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 
00:34:41.270 [2024-07-14 02:21:46.841277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.841429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.841455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.841469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.841483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.841510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 00:34:41.270 [2024-07-14 02:21:46.851365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.851540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.851567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.851582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.851595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.851623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 00:34:41.270 [2024-07-14 02:21:46.861352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.861503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.861534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.861550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.861563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.861591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 
00:34:41.270 [2024-07-14 02:21:46.871342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.871495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.871520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.871534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.871547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.871575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 00:34:41.270 [2024-07-14 02:21:46.881411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.881565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.881592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.881607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.881619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.881647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 00:34:41.270 [2024-07-14 02:21:46.891447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.891593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.891619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.891634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.891645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.891673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 
00:34:41.270 [2024-07-14 02:21:46.901446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.901602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.901627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.901641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.901654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.901688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 00:34:41.270 [2024-07-14 02:21:46.911509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.911666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.911692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.911706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.911718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.911746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 00:34:41.270 [2024-07-14 02:21:46.921567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.270 [2024-07-14 02:21:46.921713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.270 [2024-07-14 02:21:46.921740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.270 [2024-07-14 02:21:46.921755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.270 [2024-07-14 02:21:46.921768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.270 [2024-07-14 02:21:46.921796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.270 qpair failed and we were unable to recover it. 
00:34:41.533 [2024-07-14 02:21:46.931616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.533 [2024-07-14 02:21:46.931828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.533 [2024-07-14 02:21:46.931855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.533 [2024-07-14 02:21:46.931876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.533 [2024-07-14 02:21:46.931890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.533 [2024-07-14 02:21:46.931918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.533 qpair failed and we were unable to recover it. 00:34:41.533 [2024-07-14 02:21:46.941660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.533 [2024-07-14 02:21:46.941815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:46.941840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:46.941855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:46.941874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:46.941904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 00:34:41.534 [2024-07-14 02:21:46.951730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:46.951900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:46.951931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:46.951947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:46.951960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:46.951988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 
00:34:41.534 [2024-07-14 02:21:46.961639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:46.961787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:46.961813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:46.961827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:46.961840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:46.961874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 00:34:41.534 [2024-07-14 02:21:46.971660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:46.971813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:46.971839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:46.971854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:46.971909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:46.971947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 00:34:41.534 [2024-07-14 02:21:46.981673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:46.981824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:46.981850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:46.981870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:46.981885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:46.981913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 
00:34:41.534 [2024-07-14 02:21:46.991726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:46.991876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:46.991902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:46.991916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:46.991929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:46.991966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 00:34:41.534 [2024-07-14 02:21:47.001783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:47.001931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:47.001957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:47.001972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:47.001985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:47.002013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 00:34:41.534 [2024-07-14 02:21:47.011821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:47.011981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:47.012007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:47.012022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:47.012035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:47.012063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 
00:34:41.534 [2024-07-14 02:21:47.021798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:47.021961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:47.021987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:47.022002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:47.022015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:47.022043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 00:34:41.534 [2024-07-14 02:21:47.031850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:47.032011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:47.032037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:47.032052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:47.032065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:47.032094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 00:34:41.534 [2024-07-14 02:21:47.041887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:47.042043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:47.042073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:47.042088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:47.042101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:47.042129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 
00:34:41.534 [2024-07-14 02:21:47.051881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:47.052025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:47.052051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:47.052066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:47.052078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:47.052106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 00:34:41.534 [2024-07-14 02:21:47.061933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:47.062106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:47.062131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:47.062145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:47.062159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:47.062186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 00:34:41.534 [2024-07-14 02:21:47.071947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:47.072099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:47.072124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:47.072138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:47.072151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:47.072179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.534 qpair failed and we were unable to recover it. 
00:34:41.534 [2024-07-14 02:21:47.082058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.534 [2024-07-14 02:21:47.082204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.534 [2024-07-14 02:21:47.082230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.534 [2024-07-14 02:21:47.082245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.534 [2024-07-14 02:21:47.082257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.534 [2024-07-14 02:21:47.082291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.092019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.092170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.092195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.092210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.092223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.092250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.102016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.102169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.102195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.102210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.102223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.102251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 
00:34:41.535 [2024-07-14 02:21:47.112043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.112193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.112219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.112234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.112247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.112275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.122081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.122233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.122259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.122275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.122288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.122316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.132111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.132267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.132298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.132313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.132327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.132355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 
00:34:41.535 [2024-07-14 02:21:47.142170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.142327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.142353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.142367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.142380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.142408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.152299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.152451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.152476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.152490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.152503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.152531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.162205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.162349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.162375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.162389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.162402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.162430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 
00:34:41.535 [2024-07-14 02:21:47.172218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.172383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.172408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.172422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.172440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.172469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.182252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.182403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.182429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.182443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.182456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.182484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.192291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.192444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.192469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.192483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.192496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.192524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 
00:34:41.535 [2024-07-14 02:21:47.202338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.202548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.202573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.202587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.202600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.202628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.212367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.212520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.212546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.212560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.212573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.212600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 00:34:41.535 [2024-07-14 02:21:47.222396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.535 [2024-07-14 02:21:47.222557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.535 [2024-07-14 02:21:47.222583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.535 [2024-07-14 02:21:47.222597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.535 [2024-07-14 02:21:47.222610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.535 [2024-07-14 02:21:47.222638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.535 qpair failed and we were unable to recover it. 
00:34:41.794 [2024-07-14 02:21:47.232403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.794 [2024-07-14 02:21:47.232552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.794 [2024-07-14 02:21:47.232577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.794 [2024-07-14 02:21:47.232592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.794 [2024-07-14 02:21:47.232605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.794 [2024-07-14 02:21:47.232633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-14 02:21:47.242484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.794 [2024-07-14 02:21:47.242678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.794 [2024-07-14 02:21:47.242704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.794 [2024-07-14 02:21:47.242718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.794 [2024-07-14 02:21:47.242731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.794 [2024-07-14 02:21:47.242759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-14 02:21:47.252469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.794 [2024-07-14 02:21:47.252614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.794 [2024-07-14 02:21:47.252640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.794 [2024-07-14 02:21:47.252655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.794 [2024-07-14 02:21:47.252667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.794 [2024-07-14 02:21:47.252695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.794 qpair failed and we were unable to recover it. 
00:34:41.794 [2024-07-14 02:21:47.262478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.794 [2024-07-14 02:21:47.262628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.794 [2024-07-14 02:21:47.262654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.794 [2024-07-14 02:21:47.262668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.794 [2024-07-14 02:21:47.262686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.794 [2024-07-14 02:21:47.262715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-14 02:21:47.272521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.272706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.272731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.272745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.272759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.272786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-14 02:21:47.282562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.282707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.282733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.282747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.282761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.282788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 
00:34:41.795 [2024-07-14 02:21:47.292586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.292733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.292757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.292772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.292785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.292813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-14 02:21:47.302629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.302787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.302812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.302827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.302839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.302876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-14 02:21:47.312646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.312805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.312831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.312846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.312858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.312894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 
00:34:41.795 [2024-07-14 02:21:47.322672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.322821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.322846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.322860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.322884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.322914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-14 02:21:47.332779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.332936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.332964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.332982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.332996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.333025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-14 02:21:47.342778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.342948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.342974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.342989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.343002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.343030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 
00:34:41.795 [2024-07-14 02:21:47.352764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.352908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.352934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.352953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.352966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.352994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-14 02:21:47.362804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.362960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.362987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.363001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.363014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.363042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-14 02:21:47.372834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.372994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.373021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.373041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.373054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.373083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 
00:34:41.795 [2024-07-14 02:21:47.382933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.383083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.383109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.383124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.383137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.383165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-14 02:21:47.392893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.393043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.393068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.393082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.393093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.393122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-14 02:21:47.402924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.403097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.403122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.403136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.795 [2024-07-14 02:21:47.403149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.795 [2024-07-14 02:21:47.403178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.795 qpair failed and we were unable to recover it. 
00:34:41.795 [2024-07-14 02:21:47.412938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.795 [2024-07-14 02:21:47.413084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.795 [2024-07-14 02:21:47.413109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.795 [2024-07-14 02:21:47.413124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.796 [2024-07-14 02:21:47.413137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.796 [2024-07-14 02:21:47.413164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.796 qpair failed and we were unable to recover it. 00:34:41.796 [2024-07-14 02:21:47.423002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.796 [2024-07-14 02:21:47.423165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.796 [2024-07-14 02:21:47.423190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.796 [2024-07-14 02:21:47.423204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.796 [2024-07-14 02:21:47.423217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.796 [2024-07-14 02:21:47.423244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.796 qpair failed and we were unable to recover it. 00:34:41.796 [2024-07-14 02:21:47.433008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.796 [2024-07-14 02:21:47.433155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.796 [2024-07-14 02:21:47.433180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.796 [2024-07-14 02:21:47.433194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.796 [2024-07-14 02:21:47.433206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.796 [2024-07-14 02:21:47.433234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.796 qpair failed and we were unable to recover it. 
00:34:41.796 [2024-07-14 02:21:47.443117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.796 [2024-07-14 02:21:47.443264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.796 [2024-07-14 02:21:47.443290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.796 [2024-07-14 02:21:47.443310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.796 [2024-07-14 02:21:47.443324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.796 [2024-07-14 02:21:47.443352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.796 qpair failed and we were unable to recover it. 00:34:41.796 [2024-07-14 02:21:47.453055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.796 [2024-07-14 02:21:47.453203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.796 [2024-07-14 02:21:47.453228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.796 [2024-07-14 02:21:47.453242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.796 [2024-07-14 02:21:47.453255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.796 [2024-07-14 02:21:47.453283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.796 qpair failed and we were unable to recover it. 00:34:41.796 [2024-07-14 02:21:47.463083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.796 [2024-07-14 02:21:47.463235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.796 [2024-07-14 02:21:47.463260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.796 [2024-07-14 02:21:47.463274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.796 [2024-07-14 02:21:47.463287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.796 [2024-07-14 02:21:47.463315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.796 qpair failed and we were unable to recover it. 
00:34:41.796 [2024-07-14 02:21:47.473094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.796 [2024-07-14 02:21:47.473253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.796 [2024-07-14 02:21:47.473279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.796 [2024-07-14 02:21:47.473293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.796 [2024-07-14 02:21:47.473306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.796 [2024-07-14 02:21:47.473334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.796 qpair failed and we were unable to recover it. 00:34:41.796 [2024-07-14 02:21:47.483178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.796 [2024-07-14 02:21:47.483328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.796 [2024-07-14 02:21:47.483354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.796 [2024-07-14 02:21:47.483369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.796 [2024-07-14 02:21:47.483382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:41.796 [2024-07-14 02:21:47.483410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.796 qpair failed and we were unable to recover it. 00:34:42.053 [2024-07-14 02:21:47.493229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.053 [2024-07-14 02:21:47.493378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.053 [2024-07-14 02:21:47.493404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.053 [2024-07-14 02:21:47.493418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.053 [2024-07-14 02:21:47.493431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.053 [2024-07-14 02:21:47.493460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.053 qpair failed and we were unable to recover it. 
00:34:42.053 [2024-07-14 02:21:47.503270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.053 [2024-07-14 02:21:47.503459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.053 [2024-07-14 02:21:47.503484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.053 [2024-07-14 02:21:47.503499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.053 [2024-07-14 02:21:47.503512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.053 [2024-07-14 02:21:47.503539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.053 qpair failed and we were unable to recover it. 00:34:42.053 [2024-07-14 02:21:47.513216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.053 [2024-07-14 02:21:47.513363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.053 [2024-07-14 02:21:47.513389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.053 [2024-07-14 02:21:47.513403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.053 [2024-07-14 02:21:47.513416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.053 [2024-07-14 02:21:47.513444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.053 qpair failed and we were unable to recover it. 00:34:42.053 [2024-07-14 02:21:47.523246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.053 [2024-07-14 02:21:47.523391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.053 [2024-07-14 02:21:47.523418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.053 [2024-07-14 02:21:47.523432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.523445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.523473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 
00:34:42.054 [2024-07-14 02:21:47.533291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.533464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.533489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.533510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.533524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.533554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 00:34:42.054 [2024-07-14 02:21:47.543297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.543474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.543500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.543514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.543527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.543555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 00:34:42.054 [2024-07-14 02:21:47.553354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.553504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.553529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.553543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.553556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.553584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 
00:34:42.054 [2024-07-14 02:21:47.563376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.563580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.563605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.563619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.563632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.563659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 00:34:42.054 [2024-07-14 02:21:47.573458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.573602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.573627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.573641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.573654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.573682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 00:34:42.054 [2024-07-14 02:21:47.583572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.583778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.583803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.583817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.583830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.583858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 
00:34:42.054 [2024-07-14 02:21:47.593507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.593649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.593674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.593688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.593701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.593728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 00:34:42.054 [2024-07-14 02:21:47.603535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.603686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.603712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.603727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.603740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.603768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 00:34:42.054 [2024-07-14 02:21:47.613518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.613663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.613689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.613703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.613716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.613744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 
00:34:42.054 [2024-07-14 02:21:47.623540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.623688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.623717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.623733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.623745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.623773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 00:34:42.054 [2024-07-14 02:21:47.633603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.633755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.633780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.633794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.633806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.633834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 00:34:42.054 [2024-07-14 02:21:47.643579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.643722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.643747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.643761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.643774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.643801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 
00:34:42.054 [2024-07-14 02:21:47.653610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.054 [2024-07-14 02:21:47.653757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.054 [2024-07-14 02:21:47.653782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.054 [2024-07-14 02:21:47.653797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.054 [2024-07-14 02:21:47.653810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.054 [2024-07-14 02:21:47.653837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.054 qpair failed and we were unable to recover it. 00:34:42.054 [2024-07-14 02:21:47.663639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.055 [2024-07-14 02:21:47.663792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.055 [2024-07-14 02:21:47.663817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.055 [2024-07-14 02:21:47.663832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.055 [2024-07-14 02:21:47.663845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.055 [2024-07-14 02:21:47.663878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.055 qpair failed and we were unable to recover it. 00:34:42.055 [2024-07-14 02:21:47.673682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.055 [2024-07-14 02:21:47.673828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.055 [2024-07-14 02:21:47.673853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.055 [2024-07-14 02:21:47.673875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.055 [2024-07-14 02:21:47.673890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.055 [2024-07-14 02:21:47.673918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.055 qpair failed and we were unable to recover it. 
00:34:42.055 [2024-07-14 02:21:47.683676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.055 [2024-07-14 02:21:47.683831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.055 [2024-07-14 02:21:47.683856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.055 [2024-07-14 02:21:47.683878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.055 [2024-07-14 02:21:47.683893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.055 [2024-07-14 02:21:47.683921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.055 qpair failed and we were unable to recover it. 00:34:42.055 [2024-07-14 02:21:47.693707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.055 [2024-07-14 02:21:47.693883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.055 [2024-07-14 02:21:47.693908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.055 [2024-07-14 02:21:47.693923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.055 [2024-07-14 02:21:47.693935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.055 [2024-07-14 02:21:47.693963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.055 qpair failed and we were unable to recover it. 00:34:42.055 [2024-07-14 02:21:47.703757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.055 [2024-07-14 02:21:47.703911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.055 [2024-07-14 02:21:47.703937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.055 [2024-07-14 02:21:47.703951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.055 [2024-07-14 02:21:47.703964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.055 [2024-07-14 02:21:47.703992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.055 qpair failed and we were unable to recover it. 
00:34:42.055 [2024-07-14 02:21:47.713910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.055 [2024-07-14 02:21:47.714063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.055 [2024-07-14 02:21:47.714093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.055 [2024-07-14 02:21:47.714108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.055 [2024-07-14 02:21:47.714121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.055 [2024-07-14 02:21:47.714149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.055 qpair failed and we were unable to recover it. 00:34:42.055 [2024-07-14 02:21:47.723824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.055 [2024-07-14 02:21:47.723982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.055 [2024-07-14 02:21:47.724007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.055 [2024-07-14 02:21:47.724022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.055 [2024-07-14 02:21:47.724035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.055 [2024-07-14 02:21:47.724063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.055 qpair failed and we were unable to recover it. 00:34:42.055 [2024-07-14 02:21:47.733842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.055 [2024-07-14 02:21:47.734005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.055 [2024-07-14 02:21:47.734030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.055 [2024-07-14 02:21:47.734044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.055 [2024-07-14 02:21:47.734055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.055 [2024-07-14 02:21:47.734084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.055 qpair failed and we were unable to recover it. 
00:34:42.055 [2024-07-14 02:21:47.743928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.055 [2024-07-14 02:21:47.744090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.055 [2024-07-14 02:21:47.744116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.055 [2024-07-14 02:21:47.744131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.055 [2024-07-14 02:21:47.744144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.055 [2024-07-14 02:21:47.744173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.055 qpair failed and we were unable to recover it. 00:34:42.313 [2024-07-14 02:21:47.753896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.754092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.754117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.754132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.754145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.313 [2024-07-14 02:21:47.754183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.313 qpair failed and we were unable to recover it. 00:34:42.313 [2024-07-14 02:21:47.763972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.764174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.764199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.764213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.764226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.313 [2024-07-14 02:21:47.764255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.313 qpair failed and we were unable to recover it. 
00:34:42.313 [2024-07-14 02:21:47.773961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.774108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.774133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.774147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.774159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.313 [2024-07-14 02:21:47.774187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.313 qpair failed and we were unable to recover it. 00:34:42.313 [2024-07-14 02:21:47.783990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.784154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.784180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.784194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.784207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.313 [2024-07-14 02:21:47.784235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.313 qpair failed and we were unable to recover it. 00:34:42.313 [2024-07-14 02:21:47.794036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.794180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.794205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.794219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.794232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.313 [2024-07-14 02:21:47.794260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.313 qpair failed and we were unable to recover it. 
00:34:42.313 [2024-07-14 02:21:47.804068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.804234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.804264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.804279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.804292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.313 [2024-07-14 02:21:47.804320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.313 qpair failed and we were unable to recover it. 00:34:42.313 [2024-07-14 02:21:47.814064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.814206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.814232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.814247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.814260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.313 [2024-07-14 02:21:47.814287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.313 qpair failed and we were unable to recover it. 00:34:42.313 [2024-07-14 02:21:47.824110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.824281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.824307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.824321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.824334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.313 [2024-07-14 02:21:47.824363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.313 qpair failed and we were unable to recover it. 
00:34:42.313 [2024-07-14 02:21:47.834151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.834307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.834332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.834347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.834360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.313 [2024-07-14 02:21:47.834388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.313 qpair failed and we were unable to recover it. 00:34:42.313 [2024-07-14 02:21:47.844195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.313 [2024-07-14 02:21:47.844345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.313 [2024-07-14 02:21:47.844370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.313 [2024-07-14 02:21:47.844385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.313 [2024-07-14 02:21:47.844398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.844431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.854247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.854403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.854428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.854443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.854456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.854484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 
00:34:42.314 [2024-07-14 02:21:47.864237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.864443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.864467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.864482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.864494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.864521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.874233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.874389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.874414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.874428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.874441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.874468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.884253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.884400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.884425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.884440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.884452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.884480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 
00:34:42.314 [2024-07-14 02:21:47.894301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.894464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.894493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.894508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.894522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.894549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.904325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.904477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.904501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.904516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.904528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.904555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.914338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.914483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.914508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.914522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.914535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.914561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 
00:34:42.314 [2024-07-14 02:21:47.924356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.924504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.924529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.924543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.924556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.924583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.934409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.934568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.934592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.934607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.934625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.934653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.944417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.944570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.944594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.944608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.944621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.944648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 
00:34:42.314 [2024-07-14 02:21:47.954491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.954641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.954665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.954679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.954692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.954720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.964515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.964663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.964687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.964702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.964714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.964742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.974538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.974719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.974745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.974766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.974780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.974811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 
00:34:42.314 [2024-07-14 02:21:47.984574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.314 [2024-07-14 02:21:47.984732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.314 [2024-07-14 02:21:47.984757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.314 [2024-07-14 02:21:47.984772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.314 [2024-07-14 02:21:47.984785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.314 [2024-07-14 02:21:47.984812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.314 qpair failed and we were unable to recover it. 00:34:42.314 [2024-07-14 02:21:47.994630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.315 [2024-07-14 02:21:47.994828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.315 [2024-07-14 02:21:47.994853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.315 [2024-07-14 02:21:47.994876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.315 [2024-07-14 02:21:47.994893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.315 [2024-07-14 02:21:47.994921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.315 qpair failed and we were unable to recover it. 00:34:42.573 [2024-07-14 02:21:48.004601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.573 [2024-07-14 02:21:48.004753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.573 [2024-07-14 02:21:48.004789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.573 [2024-07-14 02:21:48.004809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.573 [2024-07-14 02:21:48.004822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.573 [2024-07-14 02:21:48.004852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.573 qpair failed and we were unable to recover it. 
00:34:42.573 [2024-07-14 02:21:48.014623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.573 [2024-07-14 02:21:48.014776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.573 [2024-07-14 02:21:48.014802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.573 [2024-07-14 02:21:48.014817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.573 [2024-07-14 02:21:48.014831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.573 [2024-07-14 02:21:48.014859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.573 qpair failed and we were unable to recover it. 00:34:42.573 [2024-07-14 02:21:48.024745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.573 [2024-07-14 02:21:48.024900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.573 [2024-07-14 02:21:48.024926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.024940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.024958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.024988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.574 [2024-07-14 02:21:48.034706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.034860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.034897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.034913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.034927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.034956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 
00:34:42.574 [2024-07-14 02:21:48.044732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.044884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.044910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.044925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.044938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.044966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.574 [2024-07-14 02:21:48.054747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.054906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.054931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.054946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.054959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.054987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.574 [2024-07-14 02:21:48.064780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.064944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.064970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.064984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.064997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.065025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 
00:34:42.574 [2024-07-14 02:21:48.074835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.074999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.075025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.075040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.075053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.075082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.574 [2024-07-14 02:21:48.084839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.084990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.085015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.085029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.085041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.085070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.574 [2024-07-14 02:21:48.094946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.095091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.095116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.095131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.095144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.095172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 
00:34:42.574 [2024-07-14 02:21:48.104923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.105078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.105103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.105117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.105130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.105157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.574 [2024-07-14 02:21:48.114923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.115070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.115095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.115109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.115127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.115156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.574 [2024-07-14 02:21:48.124961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.125116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.125142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.125156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.125170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.125197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 
00:34:42.574 [2024-07-14 02:21:48.135047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.135218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.135242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.135256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.135269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.135296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.574 [2024-07-14 02:21:48.145136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.145288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.145314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.145328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.145341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.145369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.574 [2024-07-14 02:21:48.155037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.155186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.155211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.155225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.155238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.155265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 
00:34:42.574 [2024-07-14 02:21:48.165085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.574 [2024-07-14 02:21:48.165231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.574 [2024-07-14 02:21:48.165256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.574 [2024-07-14 02:21:48.165271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.574 [2024-07-14 02:21:48.165284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.574 [2024-07-14 02:21:48.165311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.574 qpair failed and we were unable to recover it. 00:34:42.575 [2024-07-14 02:21:48.175115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.575 [2024-07-14 02:21:48.175263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.575 [2024-07-14 02:21:48.175289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.575 [2024-07-14 02:21:48.175303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.575 [2024-07-14 02:21:48.175316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.575 [2024-07-14 02:21:48.175344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.575 qpair failed and we were unable to recover it. 00:34:42.575 [2024-07-14 02:21:48.185174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.575 [2024-07-14 02:21:48.185344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.575 [2024-07-14 02:21:48.185370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.575 [2024-07-14 02:21:48.185384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.575 [2024-07-14 02:21:48.185398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.575 [2024-07-14 02:21:48.185426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.575 qpair failed and we were unable to recover it. 
00:34:42.575 [2024-07-14 02:21:48.195276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.575 [2024-07-14 02:21:48.195448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.575 [2024-07-14 02:21:48.195473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.575 [2024-07-14 02:21:48.195487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.575 [2024-07-14 02:21:48.195500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.575 [2024-07-14 02:21:48.195528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.575 qpair failed and we were unable to recover it. 00:34:42.575 [2024-07-14 02:21:48.205227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.575 [2024-07-14 02:21:48.205377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.575 [2024-07-14 02:21:48.205402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.575 [2024-07-14 02:21:48.205423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.575 [2024-07-14 02:21:48.205438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.575 [2024-07-14 02:21:48.205465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.575 qpair failed and we were unable to recover it. 00:34:42.575 [2024-07-14 02:21:48.215221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.575 [2024-07-14 02:21:48.215371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.575 [2024-07-14 02:21:48.215395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.575 [2024-07-14 02:21:48.215409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.575 [2024-07-14 02:21:48.215422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.575 [2024-07-14 02:21:48.215449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.575 qpair failed and we were unable to recover it. 
00:34:42.575 [2024-07-14 02:21:48.225257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.575 [2024-07-14 02:21:48.225406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.575 [2024-07-14 02:21:48.225431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.575 [2024-07-14 02:21:48.225445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.575 [2024-07-14 02:21:48.225458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.575 [2024-07-14 02:21:48.225486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.575 qpair failed and we were unable to recover it. 00:34:42.575 [2024-07-14 02:21:48.235301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.575 [2024-07-14 02:21:48.235461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.575 [2024-07-14 02:21:48.235486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.575 [2024-07-14 02:21:48.235500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.575 [2024-07-14 02:21:48.235513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.575 [2024-07-14 02:21:48.235540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.575 qpair failed and we were unable to recover it. 00:34:42.575 [2024-07-14 02:21:48.245288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.575 [2024-07-14 02:21:48.245436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.575 [2024-07-14 02:21:48.245462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.575 [2024-07-14 02:21:48.245476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.575 [2024-07-14 02:21:48.245489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.575 [2024-07-14 02:21:48.245517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.575 qpair failed and we were unable to recover it. 
00:34:42.575 [2024-07-14 02:21:48.255326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.575 [2024-07-14 02:21:48.255505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.575 [2024-07-14 02:21:48.255530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.575 [2024-07-14 02:21:48.255545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.575 [2024-07-14 02:21:48.255558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.575 [2024-07-14 02:21:48.255586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.575 qpair failed and we were unable to recover it. 00:34:42.836 [2024-07-14 02:21:48.265398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.836 [2024-07-14 02:21:48.265554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.836 [2024-07-14 02:21:48.265580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.836 [2024-07-14 02:21:48.265596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.836 [2024-07-14 02:21:48.265609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.836 [2024-07-14 02:21:48.265637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.836 qpair failed and we were unable to recover it. 00:34:42.836 [2024-07-14 02:21:48.275408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.836 [2024-07-14 02:21:48.275584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.836 [2024-07-14 02:21:48.275609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.836 [2024-07-14 02:21:48.275624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.836 [2024-07-14 02:21:48.275637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.836 [2024-07-14 02:21:48.275665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.836 qpair failed and we were unable to recover it. 
00:34:42.836 [2024-07-14 02:21:48.285495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.836 [2024-07-14 02:21:48.285642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.285667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.285682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.285694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.285722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 00:34:42.837 [2024-07-14 02:21:48.295421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.295566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.295592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.295612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.295626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.295654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 00:34:42.837 [2024-07-14 02:21:48.305496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.305650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.305675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.305689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.305702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.305730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 
00:34:42.837 [2024-07-14 02:21:48.315512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.315691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.315716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.315731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.315744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.315771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 00:34:42.837 [2024-07-14 02:21:48.325566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.325740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.325765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.325779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.325792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.325820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 00:34:42.837 [2024-07-14 02:21:48.335570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.335719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.335744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.335759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.335772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.335799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 
00:34:42.837 [2024-07-14 02:21:48.345606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.345785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.345810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.345825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.345838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.345872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 00:34:42.837 [2024-07-14 02:21:48.355610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.355765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.355790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.355805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.355816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.355844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 00:34:42.837 [2024-07-14 02:21:48.365671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.365855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.365886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.365902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.365915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.365942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 
00:34:42.837 [2024-07-14 02:21:48.375682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.375863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.375895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.375910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.375923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.375950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 00:34:42.837 [2024-07-14 02:21:48.385716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.385871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.385901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.385916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.385929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.385956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.837 qpair failed and we were unable to recover it. 00:34:42.837 [2024-07-14 02:21:48.395735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.837 [2024-07-14 02:21:48.395939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.837 [2024-07-14 02:21:48.395963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.837 [2024-07-14 02:21:48.395977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.837 [2024-07-14 02:21:48.395989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.837 [2024-07-14 02:21:48.396016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 
00:34:42.838 [2024-07-14 02:21:48.405744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.405902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.405928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.405942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.405955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.405982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 00:34:42.838 [2024-07-14 02:21:48.415858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.416016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.416041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.416056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.416068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.416096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 00:34:42.838 [2024-07-14 02:21:48.425829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.425992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.426018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.426033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.426046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.426074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 
00:34:42.838 [2024-07-14 02:21:48.435847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.436041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.436066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.436081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.436094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.436122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 00:34:42.838 [2024-07-14 02:21:48.445896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.446064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.446089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.446103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.446116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.446144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 00:34:42.838 [2024-07-14 02:21:48.455917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.456094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.456121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.456141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.456154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.456183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 
00:34:42.838 [2024-07-14 02:21:48.465960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.466120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.466146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.466160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.466173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.466201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 00:34:42.838 [2024-07-14 02:21:48.475952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.476106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.476136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.476153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.476166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.476193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 00:34:42.838 [2024-07-14 02:21:48.486044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.486216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.486241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.486256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.486269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.486297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 
00:34:42.838 [2024-07-14 02:21:48.496039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.496224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.496249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.496263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.496276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.838 [2024-07-14 02:21:48.496304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.838 qpair failed and we were unable to recover it. 00:34:42.838 [2024-07-14 02:21:48.506162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.838 [2024-07-14 02:21:48.506328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.838 [2024-07-14 02:21:48.506354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.838 [2024-07-14 02:21:48.506368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.838 [2024-07-14 02:21:48.506381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.839 [2024-07-14 02:21:48.506408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.839 qpair failed and we were unable to recover it. 00:34:42.839 [2024-07-14 02:21:48.516099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.839 [2024-07-14 02:21:48.516246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.839 [2024-07-14 02:21:48.516271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.839 [2024-07-14 02:21:48.516286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.839 [2024-07-14 02:21:48.516298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.839 [2024-07-14 02:21:48.516336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.839 qpair failed and we were unable to recover it. 
00:34:42.839 [2024-07-14 02:21:48.526097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.839 [2024-07-14 02:21:48.526248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.839 [2024-07-14 02:21:48.526274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.839 [2024-07-14 02:21:48.526288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.839 [2024-07-14 02:21:48.526301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:42.839 [2024-07-14 02:21:48.526329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.839 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-14 02:21:48.536134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.099 [2024-07-14 02:21:48.536290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.099 [2024-07-14 02:21:48.536316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.099 [2024-07-14 02:21:48.536332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.099 [2024-07-14 02:21:48.536345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.099 [2024-07-14 02:21:48.536373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-14 02:21:48.546184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.099 [2024-07-14 02:21:48.546337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.099 [2024-07-14 02:21:48.546362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.099 [2024-07-14 02:21:48.546376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.099 [2024-07-14 02:21:48.546389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.099 [2024-07-14 02:21:48.546417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-14 02:21:48.556201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.099 [2024-07-14 02:21:48.556368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.099 [2024-07-14 02:21:48.556395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.099 [2024-07-14 02:21:48.556409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.099 [2024-07-14 02:21:48.556422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.099 [2024-07-14 02:21:48.556450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-14 02:21:48.566210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.099 [2024-07-14 02:21:48.566356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.099 [2024-07-14 02:21:48.566387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.099 [2024-07-14 02:21:48.566402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.099 [2024-07-14 02:21:48.566415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.099 [2024-07-14 02:21:48.566442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-14 02:21:48.576311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.099 [2024-07-14 02:21:48.576528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.099 [2024-07-14 02:21:48.576554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.099 [2024-07-14 02:21:48.576569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.099 [2024-07-14 02:21:48.576586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.099 [2024-07-14 02:21:48.576615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-14 02:21:48.586306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.099 [2024-07-14 02:21:48.586478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.099 [2024-07-14 02:21:48.586504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.099 [2024-07-14 02:21:48.586518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.099 [2024-07-14 02:21:48.586531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.099 [2024-07-14 02:21:48.586559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-14 02:21:48.596315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.099 [2024-07-14 02:21:48.596465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.099 [2024-07-14 02:21:48.596491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.099 [2024-07-14 02:21:48.596506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.099 [2024-07-14 02:21:48.596519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.099 [2024-07-14 02:21:48.596548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-14 02:21:48.606361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.099 [2024-07-14 02:21:48.606511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.099 [2024-07-14 02:21:48.606537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.606551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.606564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.606598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 
00:34:43.100 [2024-07-14 02:21:48.616383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.616534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.616560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.616575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.616587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.616615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-14 02:21:48.626476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.626662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.626689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.626709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.626722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.626752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-14 02:21:48.636410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.636552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.636579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.636593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.636606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.636634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 
00:34:43.100 [2024-07-14 02:21:48.646433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.646578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.646604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.646618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.646631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.646659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-14 02:21:48.656454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.656601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.656631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.656646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.656660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.656688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-14 02:21:48.666538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.666700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.666725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.666740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.666752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.666780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 
00:34:43.100 [2024-07-14 02:21:48.676532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.676682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.676708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.676722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.676735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.676762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-14 02:21:48.686547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.686700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.686725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.686739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.686753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.686780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-14 02:21:48.696571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.696719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.696744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.696759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.696777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.696806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 
00:34:43.100 [2024-07-14 02:21:48.706603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.706761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.706787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.706802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.706815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.706843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-14 02:21:48.716719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.716884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.716911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.716930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.716944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.716973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-14 02:21:48.726753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.726904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.726930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.726945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.726958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.726986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 
00:34:43.100 [2024-07-14 02:21:48.736690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.736850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.736885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.736900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.736913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.736941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-14 02:21:48.746792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.100 [2024-07-14 02:21:48.746958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.100 [2024-07-14 02:21:48.746984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.100 [2024-07-14 02:21:48.746998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.100 [2024-07-14 02:21:48.747011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.100 [2024-07-14 02:21:48.747039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.101 [2024-07-14 02:21:48.756751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.101 [2024-07-14 02:21:48.756907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.101 [2024-07-14 02:21:48.756932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.101 [2024-07-14 02:21:48.756947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.101 [2024-07-14 02:21:48.756960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.101 [2024-07-14 02:21:48.756988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.101 qpair failed and we were unable to recover it. 
00:34:43.101 [2024-07-14 02:21:48.766821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.101 [2024-07-14 02:21:48.766978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.101 [2024-07-14 02:21:48.767006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.101 [2024-07-14 02:21:48.767021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.101 [2024-07-14 02:21:48.767034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.101 [2024-07-14 02:21:48.767062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.101 qpair failed and we were unable to recover it. 00:34:43.101 [2024-07-14 02:21:48.776809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.101 [2024-07-14 02:21:48.776962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.101 [2024-07-14 02:21:48.776987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.101 [2024-07-14 02:21:48.777002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.101 [2024-07-14 02:21:48.777015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.101 [2024-07-14 02:21:48.777043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.101 qpair failed and we were unable to recover it. 00:34:43.101 [2024-07-14 02:21:48.786907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.101 [2024-07-14 02:21:48.787078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.101 [2024-07-14 02:21:48.787103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.101 [2024-07-14 02:21:48.787118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.101 [2024-07-14 02:21:48.787137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.101 [2024-07-14 02:21:48.787165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.101 qpair failed and we were unable to recover it. 
00:34:43.362 [2024-07-14 02:21:48.796892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.797104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.797131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.797150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.797164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.797194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 00:34:43.362 [2024-07-14 02:21:48.806890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.807037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.807063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.807077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.807090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.807118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 00:34:43.362 [2024-07-14 02:21:48.816954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.817119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.817144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.817159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.817172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.817200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 
00:34:43.362 [2024-07-14 02:21:48.826972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.827124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.827149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.827164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.827176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.827204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 00:34:43.362 [2024-07-14 02:21:48.837004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.837197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.837222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.837237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.837250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.837278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 00:34:43.362 [2024-07-14 02:21:48.847019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.847170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.847195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.847210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.847223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.847251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 
00:34:43.362 [2024-07-14 02:21:48.857049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.857195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.857220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.857235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.857247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.857275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 00:34:43.362 [2024-07-14 02:21:48.867081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.867242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.867267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.867282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.867295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.867322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 00:34:43.362 [2024-07-14 02:21:48.877153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.877338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.877365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.877385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.877404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.877434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 
00:34:43.362 [2024-07-14 02:21:48.887177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.887326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.887352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.887367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.887380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.887408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 00:34:43.362 [2024-07-14 02:21:48.897184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.897329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.897355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.897369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.897382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.897410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 00:34:43.362 [2024-07-14 02:21:48.907187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.907337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.907363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.907377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.907390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.907417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 
00:34:43.362 [2024-07-14 02:21:48.917206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.917352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.917377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.917391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.917404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.362 [2024-07-14 02:21:48.917431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.362 qpair failed and we were unable to recover it. 00:34:43.362 [2024-07-14 02:21:48.927243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.362 [2024-07-14 02:21:48.927395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.362 [2024-07-14 02:21:48.927421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.362 [2024-07-14 02:21:48.927435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.362 [2024-07-14 02:21:48.927448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:48.927476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 00:34:43.363 [2024-07-14 02:21:48.937316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:48.937464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:48.937490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:48.937504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:48.937517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:48.937545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 
00:34:43.363 [2024-07-14 02:21:48.947299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:48.947462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:48.947487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:48.947502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:48.947514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:48.947542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 00:34:43.363 [2024-07-14 02:21:48.957332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:48.957479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:48.957504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:48.957518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:48.957530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:48.957558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 00:34:43.363 [2024-07-14 02:21:48.967376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:48.967547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:48.967572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:48.967592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:48.967606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:48.967635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 
00:34:43.363 [2024-07-14 02:21:48.977408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:48.977559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:48.977584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:48.977598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:48.977611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:48.977639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 00:34:43.363 [2024-07-14 02:21:48.987445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:48.987604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:48.987630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:48.987644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:48.987657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:48.987684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 00:34:43.363 [2024-07-14 02:21:48.997501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:48.997693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:48.997719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:48.997733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:48.997749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:48.997779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 
00:34:43.363 [2024-07-14 02:21:49.007468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:49.007611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:49.007637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:49.007651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:49.007664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:49.007692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 00:34:43.363 [2024-07-14 02:21:49.017565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:49.017720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:49.017746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:49.017765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:49.017780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:49.017809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 00:34:43.363 [2024-07-14 02:21:49.027575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:49.027729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:49.027755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:49.027769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:49.027782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:49.027810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 
00:34:43.363 [2024-07-14 02:21:49.037563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:49.037714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:49.037740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:49.037754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:49.037767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:49.037795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 00:34:43.363 [2024-07-14 02:21:49.047603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.363 [2024-07-14 02:21:49.047797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.363 [2024-07-14 02:21:49.047823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.363 [2024-07-14 02:21:49.047837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.363 [2024-07-14 02:21:49.047850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.363 [2024-07-14 02:21:49.047885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.363 qpair failed and we were unable to recover it. 00:34:43.623 [2024-07-14 02:21:49.057639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.623 [2024-07-14 02:21:49.057790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.623 [2024-07-14 02:21:49.057816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.623 [2024-07-14 02:21:49.057837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.057851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.057886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 
00:34:43.624 [2024-07-14 02:21:49.067721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.067884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.067910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.067925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.067937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.067965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 00:34:43.624 [2024-07-14 02:21:49.077720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.077900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.077925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.077939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.077953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.077982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 00:34:43.624 [2024-07-14 02:21:49.087853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.088014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.088039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.088054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.088067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.088094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 
00:34:43.624 [2024-07-14 02:21:49.097757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.097922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.097949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.097968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.097981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.098010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 00:34:43.624 [2024-07-14 02:21:49.107808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.108031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.108057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.108072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.108085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.108113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 00:34:43.624 [2024-07-14 02:21:49.117814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.117961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.117987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.118001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.118015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.118043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 
00:34:43.624 [2024-07-14 02:21:49.127829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.127984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.128010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.128025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.128037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.128065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 00:34:43.624 [2024-07-14 02:21:49.137875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.138041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.138067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.138082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.138095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.138122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 00:34:43.624 [2024-07-14 02:21:49.147921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.148080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.148105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.148126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.148141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.148169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 
00:34:43.624 [2024-07-14 02:21:49.157946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.158106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.158131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.158146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.158159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.158187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 00:34:43.624 [2024-07-14 02:21:49.168022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.168170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.168195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.168210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.168223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.168251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 00:34:43.624 [2024-07-14 02:21:49.178003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.178164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.178189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.178204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.178217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.178244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 
00:34:43.624 [2024-07-14 02:21:49.188054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.188206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.188231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.188246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.188259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.624 [2024-07-14 02:21:49.188286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.624 qpair failed and we were unable to recover it. 00:34:43.624 [2024-07-14 02:21:49.198068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.624 [2024-07-14 02:21:49.198216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.624 [2024-07-14 02:21:49.198241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.624 [2024-07-14 02:21:49.198255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.624 [2024-07-14 02:21:49.198268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.198295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 00:34:43.625 [2024-07-14 02:21:49.208112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.208279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.208304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.208319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.208332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.208359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 
00:34:43.625 [2024-07-14 02:21:49.218223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.218378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.218404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.218418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.218431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.218458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 00:34:43.625 [2024-07-14 02:21:49.228147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.228297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.228323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.228337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.228350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.228377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 00:34:43.625 [2024-07-14 02:21:49.238156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.238310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.238340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.238355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.238368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.238395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 
00:34:43.625 [2024-07-14 02:21:49.248181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.248327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.248352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.248366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.248379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.248406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 00:34:43.625 [2024-07-14 02:21:49.258203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.258352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.258377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.258391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.258404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.258433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 00:34:43.625 [2024-07-14 02:21:49.268254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.268449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.268475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.268490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.268503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.268530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 
00:34:43.625 [2024-07-14 02:21:49.278350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.278502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.278527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.278542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.278555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.278592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 00:34:43.625 [2024-07-14 02:21:49.288310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.288505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.288530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.288544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.288557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.288585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 00:34:43.625 [2024-07-14 02:21:49.298319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.298460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.298487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.298502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.298515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.298543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 
00:34:43.625 [2024-07-14 02:21:49.308383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.625 [2024-07-14 02:21:49.308533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.625 [2024-07-14 02:21:49.308558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.625 [2024-07-14 02:21:49.308573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.625 [2024-07-14 02:21:49.308586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.625 [2024-07-14 02:21:49.308613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.625 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-14 02:21:49.318421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.884 [2024-07-14 02:21:49.318568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.884 [2024-07-14 02:21:49.318595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.884 [2024-07-14 02:21:49.318609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.884 [2024-07-14 02:21:49.318623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.884 [2024-07-14 02:21:49.318651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-14 02:21:49.328408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.884 [2024-07-14 02:21:49.328547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.884 [2024-07-14 02:21:49.328578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.884 [2024-07-14 02:21:49.328594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.884 [2024-07-14 02:21:49.328607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.884 [2024-07-14 02:21:49.328635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.884 qpair failed and we were unable to recover it. 
00:34:43.884 [2024-07-14 02:21:49.338432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.884 [2024-07-14 02:21:49.338609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.884 [2024-07-14 02:21:49.338635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.884 [2024-07-14 02:21:49.338649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.884 [2024-07-14 02:21:49.338662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.884 [2024-07-14 02:21:49.338690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-14 02:21:49.348490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.884 [2024-07-14 02:21:49.348641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.884 [2024-07-14 02:21:49.348666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.884 [2024-07-14 02:21:49.348681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.884 [2024-07-14 02:21:49.348694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.884 [2024-07-14 02:21:49.348722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.358495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.358645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.358671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.358685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.358697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.358725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 
00:34:43.885 [2024-07-14 02:21:49.368553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.368702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.368728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.368742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.368756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.368789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.378567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.378713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.378739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.378754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.378767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.378795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.388584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.388737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.388763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.388777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.388790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.388818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 
00:34:43.885 [2024-07-14 02:21:49.398631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.398779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.398803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.398818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.398829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.398857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.408645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.408802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.408827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.408842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.408855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.408890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.418691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.418876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.418917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.418932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.418945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.418973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 
00:34:43.885 [2024-07-14 02:21:49.428710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.428871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.428896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.428911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.428925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.428955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.438738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.438890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.438917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.438932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.438945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.438973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.448755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.448905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.448930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.448945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.448958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.448986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 
00:34:43.885 [2024-07-14 02:21:49.458812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.458960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.458985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.459000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.459013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.459047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.468875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.469081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.469108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.469122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.469136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.469164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.478856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.479053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.479080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.479094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.479107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.479135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 
00:34:43.885 [2024-07-14 02:21:49.488904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.489087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.885 [2024-07-14 02:21:49.489112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.885 [2024-07-14 02:21:49.489126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.885 [2024-07-14 02:21:49.489138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.885 [2024-07-14 02:21:49.489167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.885 qpair failed and we were unable to recover it. 00:34:43.885 [2024-07-14 02:21:49.498919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.885 [2024-07-14 02:21:49.499061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.886 [2024-07-14 02:21:49.499086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.886 [2024-07-14 02:21:49.499101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.886 [2024-07-14 02:21:49.499114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.886 [2024-07-14 02:21:49.499141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.886 qpair failed and we were unable to recover it. 00:34:43.886 [2024-07-14 02:21:49.509010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.886 [2024-07-14 02:21:49.509199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.886 [2024-07-14 02:21:49.509229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.886 [2024-07-14 02:21:49.509244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.886 [2024-07-14 02:21:49.509257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.886 [2024-07-14 02:21:49.509284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.886 qpair failed and we were unable to recover it. 
00:34:43.886 [2024-07-14 02:21:49.518980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.886 [2024-07-14 02:21:49.519144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.886 [2024-07-14 02:21:49.519170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.886 [2024-07-14 02:21:49.519184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.886 [2024-07-14 02:21:49.519197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.886 [2024-07-14 02:21:49.519225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.886 qpair failed and we were unable to recover it. 00:34:43.886 [2024-07-14 02:21:49.529029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.886 [2024-07-14 02:21:49.529205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.886 [2024-07-14 02:21:49.529230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.886 [2024-07-14 02:21:49.529244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.886 [2024-07-14 02:21:49.529257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.886 [2024-07-14 02:21:49.529284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.886 qpair failed and we were unable to recover it. 00:34:43.886 [2024-07-14 02:21:49.539073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.886 [2024-07-14 02:21:49.539219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.886 [2024-07-14 02:21:49.539245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.886 [2024-07-14 02:21:49.539259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.886 [2024-07-14 02:21:49.539272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.886 [2024-07-14 02:21:49.539300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.886 qpair failed and we were unable to recover it. 
00:34:43.886 [2024-07-14 02:21:49.549066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.886 [2024-07-14 02:21:49.549236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.886 [2024-07-14 02:21:49.549261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.886 [2024-07-14 02:21:49.549275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.886 [2024-07-14 02:21:49.549293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.886 [2024-07-14 02:21:49.549321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.886 qpair failed and we were unable to recover it. 00:34:43.886 [2024-07-14 02:21:49.559160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.886 [2024-07-14 02:21:49.559305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.886 [2024-07-14 02:21:49.559330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.886 [2024-07-14 02:21:49.559344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.886 [2024-07-14 02:21:49.559357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.886 [2024-07-14 02:21:49.559384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.886 qpair failed and we were unable to recover it. 00:34:43.886 [2024-07-14 02:21:49.569105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.886 [2024-07-14 02:21:49.569257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.886 [2024-07-14 02:21:49.569282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.886 [2024-07-14 02:21:49.569296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.886 [2024-07-14 02:21:49.569309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:43.886 [2024-07-14 02:21:49.569337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.886 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-14 02:21:49.579148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.579313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.579339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.579361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.145 [2024-07-14 02:21:49.579388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.145 [2024-07-14 02:21:49.579424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-14 02:21:49.589177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.589322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.589348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.589362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.145 [2024-07-14 02:21:49.589375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.145 [2024-07-14 02:21:49.589403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-14 02:21:49.599321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.599479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.599505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.599519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.145 [2024-07-14 02:21:49.599532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.145 [2024-07-14 02:21:49.599560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-14 02:21:49.609224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.609376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.609401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.609416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.145 [2024-07-14 02:21:49.609429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.145 [2024-07-14 02:21:49.609457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-14 02:21:49.619227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.619369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.619394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.619409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.145 [2024-07-14 02:21:49.619422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.145 [2024-07-14 02:21:49.619450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-14 02:21:49.629290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.629447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.629472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.629486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.145 [2024-07-14 02:21:49.629499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.145 [2024-07-14 02:21:49.629526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-14 02:21:49.639330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.639484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.639509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.639524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.145 [2024-07-14 02:21:49.639542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.145 [2024-07-14 02:21:49.639570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-14 02:21:49.649346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.649492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.649517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.649531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.145 [2024-07-14 02:21:49.649544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.145 [2024-07-14 02:21:49.649572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-14 02:21:49.659361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.659509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.659534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.659549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.145 [2024-07-14 02:21:49.659562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.145 [2024-07-14 02:21:49.659590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-14 02:21:49.669420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.145 [2024-07-14 02:21:49.669601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.145 [2024-07-14 02:21:49.669626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.145 [2024-07-14 02:21:49.669641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.669654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.669682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.679435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.679587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.679613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.679627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.679641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.679669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.689453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.689608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.689634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.689649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.689661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.689689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 
00:34:44.146 [2024-07-14 02:21:49.699440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.699591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.699615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.699630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.699643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.699671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.709540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.709714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.709739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.709754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.709767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.709794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.719501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.719647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.719672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.719687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.719699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.719728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 
00:34:44.146 [2024-07-14 02:21:49.729551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.729728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.729754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.729776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.729793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.729822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.739582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.739734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.739761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.739775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.739788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.739816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.749609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.749765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.749791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.749806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.749818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.749846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 
00:34:44.146 [2024-07-14 02:21:49.759618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.759764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.759790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.759804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.759817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.759845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.769643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.769804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.769830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.769844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.769857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.769895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.779664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.779807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.779831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.779846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.779859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.779895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 
00:34:44.146 [2024-07-14 02:21:49.789698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.789850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.789882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.789897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.789910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.789938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.799853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.800023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.800049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.800063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.800076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.146 [2024-07-14 02:21:49.800104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-14 02:21:49.809847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.146 [2024-07-14 02:21:49.810053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.146 [2024-07-14 02:21:49.810079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.146 [2024-07-14 02:21:49.810094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.146 [2024-07-14 02:21:49.810106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.147 [2024-07-14 02:21:49.810134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.147 qpair failed and we were unable to recover it. 
00:34:44.147 [2024-07-14 02:21:49.819846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.147 [2024-07-14 02:21:49.820006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.147 [2024-07-14 02:21:49.820032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.147 [2024-07-14 02:21:49.820052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.147 [2024-07-14 02:21:49.820066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.147 [2024-07-14 02:21:49.820094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-07-14 02:21:49.829855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.147 [2024-07-14 02:21:49.830021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.147 [2024-07-14 02:21:49.830047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.147 [2024-07-14 02:21:49.830061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.147 [2024-07-14 02:21:49.830074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.147 [2024-07-14 02:21:49.830102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.406 [2024-07-14 02:21:49.839832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.406 [2024-07-14 02:21:49.839993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.406 [2024-07-14 02:21:49.840020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.406 [2024-07-14 02:21:49.840035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.406 [2024-07-14 02:21:49.840048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.406 [2024-07-14 02:21:49.840076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.406 qpair failed and we were unable to recover it. 
00:34:44.406 [2024-07-14 02:21:49.849876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.406 [2024-07-14 02:21:49.850026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.406 [2024-07-14 02:21:49.850052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.406 [2024-07-14 02:21:49.850066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.406 [2024-07-14 02:21:49.850080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.406 [2024-07-14 02:21:49.850108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.406 qpair failed and we were unable to recover it. 00:34:44.406 [2024-07-14 02:21:49.859927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.406 [2024-07-14 02:21:49.860115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.406 [2024-07-14 02:21:49.860140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.406 [2024-07-14 02:21:49.860154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.406 [2024-07-14 02:21:49.860168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.406 [2024-07-14 02:21:49.860195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.406 qpair failed and we were unable to recover it. 00:34:44.406 [2024-07-14 02:21:49.869961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.406 [2024-07-14 02:21:49.870115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.406 [2024-07-14 02:21:49.870140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.406 [2024-07-14 02:21:49.870154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.406 [2024-07-14 02:21:49.870166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.406 [2024-07-14 02:21:49.870194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.406 qpair failed and we were unable to recover it. 
00:34:44.406 [2024-07-14 02:21:49.879952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.406 [2024-07-14 02:21:49.880101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.406 [2024-07-14 02:21:49.880125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.406 [2024-07-14 02:21:49.880140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.406 [2024-07-14 02:21:49.880152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.880183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:49.890005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.890158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.890184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.890198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.890211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.890239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:49.900131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.900290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.900315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.900330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.900343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.900371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 
00:34:44.407 [2024-07-14 02:21:49.910061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.910212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.910237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.910258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.910272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.910299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:49.920080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.920231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.920256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.920270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.920283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.920311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:49.930096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.930242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.930266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.930280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.930293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.930321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 
00:34:44.407 [2024-07-14 02:21:49.940142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.940286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.940311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.940325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.940339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.940366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:49.950168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.950322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.950347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.950362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.950375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.950402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:49.960195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.960344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.960369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.960384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.960397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.960427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 
00:34:44.407 [2024-07-14 02:21:49.970211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.970374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.970399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.970414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.970427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.970454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:49.980293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.980440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.980465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.980480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.980493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.980521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:49.990282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:49.990431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:49.990456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:49.990471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:49.990484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:49.990511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 
00:34:44.407 [2024-07-14 02:21:50.000303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:50.000454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:50.000484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:50.000499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:50.000512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:50.000540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:50.010419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:50.010582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:50.010611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:50.010626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:50.010640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:50.010671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 00:34:44.407 [2024-07-14 02:21:50.020416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.407 [2024-07-14 02:21:50.020570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.407 [2024-07-14 02:21:50.020596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.407 [2024-07-14 02:21:50.020611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.407 [2024-07-14 02:21:50.020625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.407 [2024-07-14 02:21:50.020654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.407 qpair failed and we were unable to recover it. 
00:34:44.408 [2024-07-14 02:21:50.030458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.408 [2024-07-14 02:21:50.030638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.408 [2024-07-14 02:21:50.030666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.408 [2024-07-14 02:21:50.030681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.408 [2024-07-14 02:21:50.030695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.408 [2024-07-14 02:21:50.030725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.408 qpair failed and we were unable to recover it. 00:34:44.408 [2024-07-14 02:21:50.040458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.408 [2024-07-14 02:21:50.040618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.408 [2024-07-14 02:21:50.040644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.408 [2024-07-14 02:21:50.040659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.408 [2024-07-14 02:21:50.040672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.408 [2024-07-14 02:21:50.040710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.408 qpair failed and we were unable to recover it. 00:34:44.408 [2024-07-14 02:21:50.050536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.408 [2024-07-14 02:21:50.050689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.408 [2024-07-14 02:21:50.050715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.408 [2024-07-14 02:21:50.050729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.408 [2024-07-14 02:21:50.050743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.408 [2024-07-14 02:21:50.050771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.408 qpair failed and we were unable to recover it. 
00:34:44.408 [2024-07-14 02:21:50.060520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.408 [2024-07-14 02:21:50.060669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.408 [2024-07-14 02:21:50.060695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.408 [2024-07-14 02:21:50.060709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.408 [2024-07-14 02:21:50.060722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.408 [2024-07-14 02:21:50.060750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.408 qpair failed and we were unable to recover it. 00:34:44.408 [2024-07-14 02:21:50.070547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.408 [2024-07-14 02:21:50.070704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.408 [2024-07-14 02:21:50.070730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.408 [2024-07-14 02:21:50.070750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.408 [2024-07-14 02:21:50.070764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.408 [2024-07-14 02:21:50.070792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.408 qpair failed and we were unable to recover it. 00:34:44.408 [2024-07-14 02:21:50.080527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.408 [2024-07-14 02:21:50.080684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.408 [2024-07-14 02:21:50.080709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.408 [2024-07-14 02:21:50.080724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.408 [2024-07-14 02:21:50.080737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.408 [2024-07-14 02:21:50.080764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.408 qpair failed and we were unable to recover it. 
00:34:44.408 [2024-07-14 02:21:50.090570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.408 [2024-07-14 02:21:50.090716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.408 [2024-07-14 02:21:50.090747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.408 [2024-07-14 02:21:50.090762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.408 [2024-07-14 02:21:50.090775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.408 [2024-07-14 02:21:50.090804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.408 qpair failed and we were unable to recover it. 00:34:44.667 [2024-07-14 02:21:50.100596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.667 [2024-07-14 02:21:50.100805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.667 [2024-07-14 02:21:50.100836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.667 [2024-07-14 02:21:50.100873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.667 [2024-07-14 02:21:50.100903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.667 [2024-07-14 02:21:50.100940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.667 qpair failed and we were unable to recover it. 00:34:44.667 [2024-07-14 02:21:50.110645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.667 [2024-07-14 02:21:50.110820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.667 [2024-07-14 02:21:50.110845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.667 [2024-07-14 02:21:50.110860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.667 [2024-07-14 02:21:50.110879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.667 [2024-07-14 02:21:50.110909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.667 qpair failed and we were unable to recover it. 
00:34:44.667 [2024-07-14 02:21:50.120647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.667 [2024-07-14 02:21:50.120796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.667 [2024-07-14 02:21:50.120822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.667 [2024-07-14 02:21:50.120836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.667 [2024-07-14 02:21:50.120849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.667 [2024-07-14 02:21:50.120885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.667 qpair failed and we were unable to recover it. 00:34:44.667 [2024-07-14 02:21:50.130687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.667 [2024-07-14 02:21:50.130843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.667 [2024-07-14 02:21:50.130875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.667 [2024-07-14 02:21:50.130892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.667 [2024-07-14 02:21:50.130906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.667 [2024-07-14 02:21:50.130941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.667 qpair failed and we were unable to recover it. 00:34:44.667 [2024-07-14 02:21:50.140692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.667 [2024-07-14 02:21:50.140844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.140878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.140894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.140907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.140935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 
00:34:44.668 [2024-07-14 02:21:50.150758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.150920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.150946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.150960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.150974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.151002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 00:34:44.668 [2024-07-14 02:21:50.160747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.160900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.160925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.160940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.160953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.160981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 00:34:44.668 [2024-07-14 02:21:50.170789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.170943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.170969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.170983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.170997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.171025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 
00:34:44.668 [2024-07-14 02:21:50.180825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.180979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.181009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.181025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.181038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.181066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 00:34:44.668 [2024-07-14 02:21:50.190861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.191025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.191050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.191064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.191077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.191106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 00:34:44.668 [2024-07-14 02:21:50.200886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.201081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.201106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.201121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.201134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.201161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 
00:34:44.668 [2024-07-14 02:21:50.210892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.211035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.211060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.211074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.211087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.211115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 00:34:44.668 [2024-07-14 02:21:50.220946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.221094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.221119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.221133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.221146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.221179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 00:34:44.668 [2024-07-14 02:21:50.230986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.231145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.231170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.231185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.231198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.231225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 
00:34:44.668 [2024-07-14 02:21:50.241005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.241169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.241193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.241208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.241221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.241249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 00:34:44.668 [2024-07-14 02:21:50.251044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.251189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.251214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.251228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.251241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.251269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 00:34:44.668 [2024-07-14 02:21:50.261050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.261196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.261220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.261235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.261248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.261275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 
00:34:44.668 [2024-07-14 02:21:50.271070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.271223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.271254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.271269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.668 [2024-07-14 02:21:50.271282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.668 [2024-07-14 02:21:50.271310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.668 qpair failed and we were unable to recover it. 00:34:44.668 [2024-07-14 02:21:50.281104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.668 [2024-07-14 02:21:50.281289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.668 [2024-07-14 02:21:50.281314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.668 [2024-07-14 02:21:50.281328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.669 [2024-07-14 02:21:50.281342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.669 [2024-07-14 02:21:50.281369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.669 qpair failed and we were unable to recover it. 00:34:44.669 [2024-07-14 02:21:50.291142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.669 [2024-07-14 02:21:50.291295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.669 [2024-07-14 02:21:50.291320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.669 [2024-07-14 02:21:50.291334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.669 [2024-07-14 02:21:50.291347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.669 [2024-07-14 02:21:50.291375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.669 qpair failed and we were unable to recover it. 
00:34:44.669 [2024-07-14 02:21:50.301196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.669 [2024-07-14 02:21:50.301344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.669 [2024-07-14 02:21:50.301370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.669 [2024-07-14 02:21:50.301385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.669 [2024-07-14 02:21:50.301399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.669 [2024-07-14 02:21:50.301427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.669 qpair failed and we were unable to recover it. 00:34:44.669 [2024-07-14 02:21:50.311249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.669 [2024-07-14 02:21:50.311441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.669 [2024-07-14 02:21:50.311466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.669 [2024-07-14 02:21:50.311481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.669 [2024-07-14 02:21:50.311499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.669 [2024-07-14 02:21:50.311528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.669 qpair failed and we were unable to recover it. 00:34:44.669 [2024-07-14 02:21:50.321229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.669 [2024-07-14 02:21:50.321381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.669 [2024-07-14 02:21:50.321405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.669 [2024-07-14 02:21:50.321419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.669 [2024-07-14 02:21:50.321432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.669 [2024-07-14 02:21:50.321460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.669 qpair failed and we were unable to recover it. 
00:34:44.669 [2024-07-14 02:21:50.331308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.669 [2024-07-14 02:21:50.331454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.669 [2024-07-14 02:21:50.331479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.669 [2024-07-14 02:21:50.331494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.669 [2024-07-14 02:21:50.331507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.669 [2024-07-14 02:21:50.331534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.669 qpair failed and we were unable to recover it. 00:34:44.669 [2024-07-14 02:21:50.341306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.669 [2024-07-14 02:21:50.341453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.669 [2024-07-14 02:21:50.341478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.669 [2024-07-14 02:21:50.341493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.669 [2024-07-14 02:21:50.341505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.669 [2024-07-14 02:21:50.341532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.669 qpair failed and we were unable to recover it. 00:34:44.669 [2024-07-14 02:21:50.351302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.669 [2024-07-14 02:21:50.351452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.669 [2024-07-14 02:21:50.351477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.669 [2024-07-14 02:21:50.351491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.669 [2024-07-14 02:21:50.351504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.669 [2024-07-14 02:21:50.351532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.669 qpair failed and we were unable to recover it. 
00:34:44.928 [2024-07-14 02:21:50.361380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.361553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.361581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.361595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.361607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.361635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.371408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.371563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.371589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.371604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.371617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.371645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.381421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.381575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.381600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.381615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.381628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.381658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 
00:34:44.928 [2024-07-14 02:21:50.391440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.391614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.391639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.391653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.391667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.391694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.401435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.401583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.401607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.401621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.401639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.401667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.411472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.411620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.411645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.411660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.411673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.411700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 
00:34:44.928 [2024-07-14 02:21:50.421516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.421689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.421714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.421728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.421741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.421770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.431573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.431765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.431791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.431805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.431818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.431845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.441558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.441732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.441758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.441773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.441786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.441814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 
00:34:44.928 [2024-07-14 02:21:50.451609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.451761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.451786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.451801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.451814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.451842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.461617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.461766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.461791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.461806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.461819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.461847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.471702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.471906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.471933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.471948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.471961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.471989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 
00:34:44.928 [2024-07-14 02:21:50.481697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.481872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.481897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.481912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.481925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.481953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.491753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.491908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.491933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.491954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.491968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.491996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.501713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.501878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.501904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.501918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.501931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.501959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 
00:34:44.928 [2024-07-14 02:21:50.511820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.512032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.512058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.512073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.512086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.512114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.521888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.522037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.522062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.522077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.522091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.522119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 00:34:44.928 [2024-07-14 02:21:50.531856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.532037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.532064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.532079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.532096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2362f20 00:34:44.928 [2024-07-14 02:21:50.532126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.928 qpair failed and we were unable to recover it. 
00:34:44.928 [2024-07-14 02:21:50.541843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.928 [2024-07-14 02:21:50.542045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.928 [2024-07-14 02:21:50.542078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.928 [2024-07-14 02:21:50.542095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.928 [2024-07-14 02:21:50.542108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:44.929 [2024-07-14 02:21:50.542139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.929 qpair failed and we were unable to recover it. 00:34:44.929 [2024-07-14 02:21:50.551945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.929 [2024-07-14 02:21:50.552098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.929 [2024-07-14 02:21:50.552126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.929 [2024-07-14 02:21:50.552141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.929 [2024-07-14 02:21:50.552154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:44.929 [2024-07-14 02:21:50.552185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.929 qpair failed and we were unable to recover it. 00:34:44.929 [2024-07-14 02:21:50.561947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.929 [2024-07-14 02:21:50.562114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.929 [2024-07-14 02:21:50.562142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.929 [2024-07-14 02:21:50.562158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.929 [2024-07-14 02:21:50.562171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:44.929 [2024-07-14 02:21:50.562221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.929 qpair failed and we were unable to recover it. 
00:34:44.929 [2024-07-14 02:21:50.571968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.929 [2024-07-14 02:21:50.572122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.929 [2024-07-14 02:21:50.572149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.929 [2024-07-14 02:21:50.572163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.929 [2024-07-14 02:21:50.572177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:44.929 [2024-07-14 02:21:50.572207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.929 qpair failed and we were unable to recover it. 00:34:44.929 [2024-07-14 02:21:50.582001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.929 [2024-07-14 02:21:50.582149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.929 [2024-07-14 02:21:50.582176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.929 [2024-07-14 02:21:50.582197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.929 [2024-07-14 02:21:50.582212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:44.929 [2024-07-14 02:21:50.582243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.929 qpair failed and we were unable to recover it. 00:34:44.929 [2024-07-14 02:21:50.592024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.929 [2024-07-14 02:21:50.592191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.929 [2024-07-14 02:21:50.592217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.929 [2024-07-14 02:21:50.592232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.929 [2024-07-14 02:21:50.592246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:44.929 [2024-07-14 02:21:50.592275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.929 qpair failed and we were unable to recover it. 
00:34:44.929 [2024-07-14 02:21:50.602037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.929 [2024-07-14 02:21:50.602189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.929 [2024-07-14 02:21:50.602215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.929 [2024-07-14 02:21:50.602230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.929 [2024-07-14 02:21:50.602244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:44.929 [2024-07-14 02:21:50.602273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.929 qpair failed and we were unable to recover it. 00:34:44.929 [2024-07-14 02:21:50.612089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.929 [2024-07-14 02:21:50.612251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.929 [2024-07-14 02:21:50.612278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.929 [2024-07-14 02:21:50.612293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.929 [2024-07-14 02:21:50.612307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:44.929 [2024-07-14 02:21:50.612336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.929 qpair failed and we were unable to recover it. 00:34:45.189 [2024-07-14 02:21:50.622121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.622270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.622307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.622323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.622336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.622376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 
00:34:45.189 [2024-07-14 02:21:50.632107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.632262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.632288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.632303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.632316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.632346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 00:34:45.189 [2024-07-14 02:21:50.642149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.642301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.642327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.642342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.642355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.642384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 00:34:45.189 [2024-07-14 02:21:50.652257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.652406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.652432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.652447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.652460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.652490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 
00:34:45.189 [2024-07-14 02:21:50.662202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.662358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.662384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.662399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.662412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.662441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 00:34:45.189 [2024-07-14 02:21:50.672234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.672385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.672416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.672432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.672444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.672474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 00:34:45.189 [2024-07-14 02:21:50.682240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.682401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.682428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.682443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.682456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.682485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 
00:34:45.189 [2024-07-14 02:21:50.692310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.692469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.692496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.692511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.692524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.692554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 00:34:45.189 [2024-07-14 02:21:50.702332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.702483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.702509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.702524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.702537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.702566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 00:34:45.189 [2024-07-14 02:21:50.712445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.712601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.712627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.712642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.712655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.712690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 
00:34:45.189 [2024-07-14 02:21:50.722369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.722520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.722546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.722561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.722574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.722603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 00:34:45.189 [2024-07-14 02:21:50.732419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.732565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.732591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.732606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.732619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.732650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 00:34:45.189 [2024-07-14 02:21:50.742404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.189 [2024-07-14 02:21:50.742576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.189 [2024-07-14 02:21:50.742603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.189 [2024-07-14 02:21:50.742617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.189 [2024-07-14 02:21:50.742631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.189 [2024-07-14 02:21:50.742660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.189 qpair failed and we were unable to recover it. 
00:34:45.190 [2024-07-14 02:21:50.752523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.752679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.752706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.752721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.752734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.752765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 00:34:45.190 [2024-07-14 02:21:50.762521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.762701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.762733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.762749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.762762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.762791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 00:34:45.190 [2024-07-14 02:21:50.772533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.772702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.772728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.772743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.772757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.772786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 
00:34:45.190 [2024-07-14 02:21:50.782541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.782719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.782748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.782764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.782781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.782812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 00:34:45.190 [2024-07-14 02:21:50.792614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.792795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.792822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.792837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.792850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.792887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 00:34:45.190 [2024-07-14 02:21:50.802642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.802800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.802829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.802848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.802861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.802907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 
00:34:45.190 [2024-07-14 02:21:50.812729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.812901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.812930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.812950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.812964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.812995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 00:34:45.190 [2024-07-14 02:21:50.822672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.822814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.822841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.822856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.822876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.822908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 00:34:45.190 [2024-07-14 02:21:50.832749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.832916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.832944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.832961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.832975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.833005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 
00:34:45.190 [2024-07-14 02:21:50.842706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.842854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.842886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.842902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.842915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.842945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 00:34:45.190 [2024-07-14 02:21:50.852746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.852951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.852978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.852993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.853008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.853038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 00:34:45.190 [2024-07-14 02:21:50.862752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.862903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.862929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.862944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.862957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.862988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 
00:34:45.190 [2024-07-14 02:21:50.872810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.190 [2024-07-14 02:21:50.873011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.190 [2024-07-14 02:21:50.873039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.190 [2024-07-14 02:21:50.873053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.190 [2024-07-14 02:21:50.873070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.190 [2024-07-14 02:21:50.873100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.190 qpair failed and we were unable to recover it. 00:34:45.450 [2024-07-14 02:21:50.882870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.450 [2024-07-14 02:21:50.883026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.450 [2024-07-14 02:21:50.883053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.450 [2024-07-14 02:21:50.883068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.883081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.883112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 00:34:45.451 [2024-07-14 02:21:50.892875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.893034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.893064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.893079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.893099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.893130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 
00:34:45.451 [2024-07-14 02:21:50.902925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.903081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.903108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.903123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.903136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.903167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 00:34:45.451 [2024-07-14 02:21:50.912932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.913090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.913116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.913130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.913144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.913173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 00:34:45.451 [2024-07-14 02:21:50.922948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.923094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.923121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.923136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.923148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.923190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 
00:34:45.451 [2024-07-14 02:21:50.932957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.933102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.933128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.933143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.933156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.933186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 00:34:45.451 [2024-07-14 02:21:50.943006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.943155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.943181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.943196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.943209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.943238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 00:34:45.451 [2024-07-14 02:21:50.953033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.953238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.953265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.953285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.953299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.953330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 
00:34:45.451 [2024-07-14 02:21:50.963078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.963230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.963256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.963270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.963284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.963313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 00:34:45.451 [2024-07-14 02:21:50.973068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.973213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.973240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.973255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.973268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.973298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 00:34:45.451 [2024-07-14 02:21:50.983137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.983320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.983347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.983372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.983387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.983419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 
00:34:45.451 [2024-07-14 02:21:50.993262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:50.993424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:50.993451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:50.993466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:50.993479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:50.993509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 00:34:45.451 [2024-07-14 02:21:51.003165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:51.003322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:51.003348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:51.003363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:51.003376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:51.003405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 00:34:45.451 [2024-07-14 02:21:51.013204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:51.013369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:51.013395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:51.013410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:51.013423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.451 [2024-07-14 02:21:51.013454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.451 qpair failed and we were unable to recover it. 
00:34:45.451 [2024-07-14 02:21:51.023202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.451 [2024-07-14 02:21:51.023351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.451 [2024-07-14 02:21:51.023377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.451 [2024-07-14 02:21:51.023392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.451 [2024-07-14 02:21:51.023405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.452 [2024-07-14 02:21:51.023434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.452 qpair failed and we were unable to recover it. 00:34:45.452 [2024-07-14 02:21:51.033240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.033390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.033416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.033431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.033444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8df4000b90 00:34:45.452 [2024-07-14 02:21:51.033475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.452 qpair failed and we were unable to recover it. 00:34:45.452 [2024-07-14 02:21:51.043316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.043492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.043526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.043542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.043556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.043586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 
00:34:45.452 [2024-07-14 02:21:51.053312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.053458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.053486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.053502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.053514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.053545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 00:34:45.452 [2024-07-14 02:21:51.063331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.063478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.063504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.063519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.063532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.063562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 00:34:45.452 [2024-07-14 02:21:51.073415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.073593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.073624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.073640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.073654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.073683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 
00:34:45.452 [2024-07-14 02:21:51.083416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.083574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.083601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.083616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.083629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.083659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 00:34:45.452 [2024-07-14 02:21:51.093472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.093648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.093676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.093690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.093703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.093733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 00:34:45.452 [2024-07-14 02:21:51.103465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.103641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.103669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.103684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.103697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.103726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 
00:34:45.452 [2024-07-14 02:21:51.113497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.113652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.113679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.113694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.113707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.113742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 00:34:45.452 [2024-07-14 02:21:51.123528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.123686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.123714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.123728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.123745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.123775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 00:34:45.452 [2024-07-14 02:21:51.133522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.452 [2024-07-14 02:21:51.133667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.452 [2024-07-14 02:21:51.133694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.452 [2024-07-14 02:21:51.133710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.452 [2024-07-14 02:21:51.133723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.452 [2024-07-14 02:21:51.133754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.452 qpair failed and we were unable to recover it. 
00:34:45.714 [2024-07-14 02:21:51.143558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.143714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.143741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.143757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.143771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.143801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 00:34:45.714 [2024-07-14 02:21:51.153679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.153834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.153862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.153886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.153899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.153930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 00:34:45.714 [2024-07-14 02:21:51.163626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.163778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.163812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.163828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.163842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.163881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 
00:34:45.714 [2024-07-14 02:21:51.173654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.173801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.173828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.173843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.173856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.173893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 00:34:45.714 [2024-07-14 02:21:51.183657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.183802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.183828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.183843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.183856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.183894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 00:34:45.714 [2024-07-14 02:21:51.193693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.193861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.193893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.193908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.193921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.193950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 
00:34:45.714 [2024-07-14 02:21:51.203746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.203933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.203959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.203974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.203987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.204022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 00:34:45.714 [2024-07-14 02:21:51.213826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.214006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.214033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.214048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.214061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.214091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 00:34:45.714 [2024-07-14 02:21:51.223783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.223935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.223962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.223977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.223989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.224020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 
00:34:45.714 [2024-07-14 02:21:51.233864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.234059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.234085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.234100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.234113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.234143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 00:34:45.714 [2024-07-14 02:21:51.243927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.244090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.244117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.244131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.244144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.244174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 00:34:45.714 [2024-07-14 02:21:51.253913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.254086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.254117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.254132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.254145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.254175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.714 qpair failed and we were unable to recover it. 
00:34:45.714 [2024-07-14 02:21:51.263931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.714 [2024-07-14 02:21:51.264132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.714 [2024-07-14 02:21:51.264158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.714 [2024-07-14 02:21:51.264173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.714 [2024-07-14 02:21:51.264186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.714 [2024-07-14 02:21:51.264215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.715 [2024-07-14 02:21:51.273928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.274119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.274145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.274160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.274173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.274204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.715 [2024-07-14 02:21:51.283968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.284131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.284157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.284172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.284185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.284214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 
00:34:45.715 [2024-07-14 02:21:51.293993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.294145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.294171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.294185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.294203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.294234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.715 [2024-07-14 02:21:51.304036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.304183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.304209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.304224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.304236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.304266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.715 [2024-07-14 02:21:51.314050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.314204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.314231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.314245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.314258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.314287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 
00:34:45.715 [2024-07-14 02:21:51.324117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.324263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.324290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.324304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.324318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.324347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.715 [2024-07-14 02:21:51.334106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.334250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.334276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.334290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.334304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.334335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.715 [2024-07-14 02:21:51.344148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.344307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.344334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.344354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.344368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.344399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 
00:34:45.715 [2024-07-14 02:21:51.354235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.354423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.354450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.354465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.354478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.354508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.715 [2024-07-14 02:21:51.364209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.364362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.364389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.364404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.364416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.364445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.715 [2024-07-14 02:21:51.374196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.374374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.374401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.374416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.374429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.374459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 
00:34:45.715 [2024-07-14 02:21:51.384222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.384370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.384396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.384418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.384432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.384461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.715 [2024-07-14 02:21:51.394289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.715 [2024-07-14 02:21:51.394440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.715 [2024-07-14 02:21:51.394466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.715 [2024-07-14 02:21:51.394481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.715 [2024-07-14 02:21:51.394494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.715 [2024-07-14 02:21:51.394535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.715 qpair failed and we were unable to recover it. 00:34:45.975 [2024-07-14 02:21:51.404331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.975 [2024-07-14 02:21:51.404479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.975 [2024-07-14 02:21:51.404504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.975 [2024-07-14 02:21:51.404519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.975 [2024-07-14 02:21:51.404531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.975 [2024-07-14 02:21:51.404559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.975 qpair failed and we were unable to recover it. 
00:34:45.976 [2024-07-14 02:21:51.414440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.414588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.414614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.414628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.414642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.414672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.424333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.424484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.424510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.424524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.424537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.424567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.434378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.434539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.434564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.434579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.434592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.434621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 
00:34:45.976 [2024-07-14 02:21:51.444486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.444638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.444663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.444678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.444691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.444719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.454456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.454649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.454674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.454689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.454701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.454732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.464446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.464587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.464613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.464628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.464641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.464670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 
00:34:45.976 [2024-07-14 02:21:51.474499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.474650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.474675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.474696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.474710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.474739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.484547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.484709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.484734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.484748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.484760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.484790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.494526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.494667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.494692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.494707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.494720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.494749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 
00:34:45.976 [2024-07-14 02:21:51.504590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.504743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.504769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.504784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.504797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.504838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.514630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.514785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.514809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.514824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.514837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.514873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.524618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.524774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.524799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.524814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.524827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.524855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 
00:34:45.976 [2024-07-14 02:21:51.534676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.534825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.534850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.534873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.534889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.534919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.544682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.544835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.544861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.976 [2024-07-14 02:21:51.544882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.976 [2024-07-14 02:21:51.544896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.976 [2024-07-14 02:21:51.544925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.976 qpair failed and we were unable to recover it. 00:34:45.976 [2024-07-14 02:21:51.554726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.976 [2024-07-14 02:21:51.554886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.976 [2024-07-14 02:21:51.554912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.554926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.554940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.554971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 
00:34:45.977 [2024-07-14 02:21:51.564755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.564916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.564948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.564963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.564978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.565008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 00:34:45.977 [2024-07-14 02:21:51.574769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.574918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.574944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.574959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.574972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.575003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 00:34:45.977 [2024-07-14 02:21:51.584795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.584968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.584993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.585008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.585022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.585052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 
00:34:45.977 [2024-07-14 02:21:51.594901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.595078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.595105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.595119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.595133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.595175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 00:34:45.977 [2024-07-14 02:21:51.604881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.605033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.605059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.605074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.605087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.605123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 00:34:45.977 [2024-07-14 02:21:51.614918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.615071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.615097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.615112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.615125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.615154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 
00:34:45.977 [2024-07-14 02:21:51.624922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.625071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.625096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.625112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.625125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.625154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 00:34:45.977 [2024-07-14 02:21:51.634951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.635104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.635129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.635144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.635157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.635186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 00:34:45.977 [2024-07-14 02:21:51.645036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.645189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.645214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.645229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.645243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.645273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 
00:34:45.977 [2024-07-14 02:21:51.655052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.655228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.655258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.655274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.655287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.655318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 00:34:45.977 [2024-07-14 02:21:51.665030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.977 [2024-07-14 02:21:51.665175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.977 [2024-07-14 02:21:51.665201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.977 [2024-07-14 02:21:51.665216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.977 [2024-07-14 02:21:51.665230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:45.977 [2024-07-14 02:21:51.665259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:45.977 qpair failed and we were unable to recover it. 00:34:46.237 [2024-07-14 02:21:51.675085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.675249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.675275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.675291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.675304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.675334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 
00:34:46.237 [2024-07-14 02:21:51.685109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.685261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.685287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.685302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.685315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.685345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 00:34:46.237 [2024-07-14 02:21:51.695142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.695285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.695310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.695325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.695344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.695374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 00:34:46.237 [2024-07-14 02:21:51.705144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.705286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.705312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.705327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.705340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.705370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 
00:34:46.237 [2024-07-14 02:21:51.715167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.715314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.715339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.715354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.715367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.715397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 00:34:46.237 [2024-07-14 02:21:51.725250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.725416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.725441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.725456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.725469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.725498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 00:34:46.237 [2024-07-14 02:21:51.735231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.735382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.735408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.735424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.735440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.735470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 
00:34:46.237 [2024-07-14 02:21:51.745243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.745391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.745417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.745432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.745445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.745475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 00:34:46.237 [2024-07-14 02:21:51.755333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.755490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.755515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.755530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.755543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.755571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 00:34:46.237 [2024-07-14 02:21:51.765309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.765462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.765488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.765502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.765515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.765546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 
00:34:46.237 [2024-07-14 02:21:51.775347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.775506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.775531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.775546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.775559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.775588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 00:34:46.237 [2024-07-14 02:21:51.785402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.785552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.785578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.785598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.785612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.785642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.237 qpair failed and we were unable to recover it. 00:34:46.237 [2024-07-14 02:21:51.795419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.237 [2024-07-14 02:21:51.795576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.237 [2024-07-14 02:21:51.795601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.237 [2024-07-14 02:21:51.795616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.237 [2024-07-14 02:21:51.795629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.237 [2024-07-14 02:21:51.795658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 
00:34:46.238 [2024-07-14 02:21:51.805454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.805602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.805627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.805642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.805656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.805684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 00:34:46.238 [2024-07-14 02:21:51.815454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.815641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.815667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.815681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.815694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.815723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 00:34:46.238 [2024-07-14 02:21:51.825509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.825659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.825685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.825699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.825712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.825743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 
00:34:46.238 [2024-07-14 02:21:51.835536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.835686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.835712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.835727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.835740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.835771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 00:34:46.238 [2024-07-14 02:21:51.845581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.845736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.845762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.845777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.845790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.845820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 00:34:46.238 [2024-07-14 02:21:51.855579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.855722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.855748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.855762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.855776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.855804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 
00:34:46.238 [2024-07-14 02:21:51.865615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.865757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.865783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.865798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.865810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.865840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 00:34:46.238 [2024-07-14 02:21:51.875648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.875797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.875823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.875843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.875857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.875895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 00:34:46.238 [2024-07-14 02:21:51.885736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.885904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.885929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.885944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.885957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.885988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 
00:34:46.238 [2024-07-14 02:21:51.895698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.895842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.895873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.895889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.895902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.895932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 00:34:46.238 [2024-07-14 02:21:51.905751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.905905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.905930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.905945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.905958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.905988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 00:34:46.238 [2024-07-14 02:21:51.915768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.915928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.915962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.915977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.915991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.916021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 
00:34:46.238 [2024-07-14 02:21:51.925910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.238 [2024-07-14 02:21:51.926070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.238 [2024-07-14 02:21:51.926096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.238 [2024-07-14 02:21:51.926111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.238 [2024-07-14 02:21:51.926124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.238 [2024-07-14 02:21:51.926154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.238 qpair failed and we were unable to recover it. 00:34:46.497 [2024-07-14 02:21:51.935847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.497 [2024-07-14 02:21:51.936026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.497 [2024-07-14 02:21:51.936052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.497 [2024-07-14 02:21:51.936067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.497 [2024-07-14 02:21:51.936080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.497 [2024-07-14 02:21:51.936109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.497 qpair failed and we were unable to recover it. 00:34:46.497 [2024-07-14 02:21:51.945864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.497 [2024-07-14 02:21:51.946016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.497 [2024-07-14 02:21:51.946041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.497 [2024-07-14 02:21:51.946056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.497 [2024-07-14 02:21:51.946069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.497 [2024-07-14 02:21:51.946098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.497 qpair failed and we were unable to recover it. 
00:34:46.497 [2024-07-14 02:21:51.955907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.497 [2024-07-14 02:21:51.956090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.497 [2024-07-14 02:21:51.956117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.497 [2024-07-14 02:21:51.956133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.497 [2024-07-14 02:21:51.956150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.497 [2024-07-14 02:21:51.956183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.497 qpair failed and we were unable to recover it. 00:34:46.497 [2024-07-14 02:21:51.965910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.497 [2024-07-14 02:21:51.966071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.497 [2024-07-14 02:21:51.966102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.497 [2024-07-14 02:21:51.966117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.497 [2024-07-14 02:21:51.966130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.497 [2024-07-14 02:21:51.966160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.497 qpair failed and we were unable to recover it. 00:34:46.497 [2024-07-14 02:21:51.975969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.497 [2024-07-14 02:21:51.976139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.497 [2024-07-14 02:21:51.976165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.497 [2024-07-14 02:21:51.976180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.497 [2024-07-14 02:21:51.976193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:51.976222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 
00:34:46.498 [2024-07-14 02:21:51.985987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:51.986136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:51.986161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:51.986176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:51.986189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:51.986218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.498 [2024-07-14 02:21:51.996026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:51.996181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:51.996208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:51.996227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:51.996241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:51.996272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.498 [2024-07-14 02:21:52.006020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.006170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.006196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.006211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.006224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.006260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 
00:34:46.498 [2024-07-14 02:21:52.016085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.016243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.016268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.016283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.016296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.016325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.498 [2024-07-14 02:21:52.026137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.026295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.026322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.026337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.026350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.026379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.498 [2024-07-14 02:21:52.036181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.036349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.036376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.036392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.036405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.036435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 
00:34:46.498 [2024-07-14 02:21:52.046183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.046387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.046413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.046428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.046441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.046470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.498 [2024-07-14 02:21:52.056176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.056327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.056363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.056378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.056391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.056420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.498 [2024-07-14 02:21:52.066324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.066488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.066513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.066529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.066542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.066571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 
00:34:46.498 [2024-07-14 02:21:52.076273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.076425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.076451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.076465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.076478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.076509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.498 [2024-07-14 02:21:52.086286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.086454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.086479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.086494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.086507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.086546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.498 [2024-07-14 02:21:52.096282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.096431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.096457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.096471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.096490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.096519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 
00:34:46.498 [2024-07-14 02:21:52.106365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.106522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.106548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.106563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.106576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.106607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.498 [2024-07-14 02:21:52.116470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.498 [2024-07-14 02:21:52.116639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.498 [2024-07-14 02:21:52.116665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.498 [2024-07-14 02:21:52.116679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.498 [2024-07-14 02:21:52.116692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.498 [2024-07-14 02:21:52.116723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.498 qpair failed and we were unable to recover it. 00:34:46.499 [2024-07-14 02:21:52.126414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.499 [2024-07-14 02:21:52.126587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.499 [2024-07-14 02:21:52.126612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.499 [2024-07-14 02:21:52.126627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.499 [2024-07-14 02:21:52.126639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.499 [2024-07-14 02:21:52.126668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.499 qpair failed and we were unable to recover it. 
00:34:46.499 [2024-07-14 02:21:52.136387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.499 [2024-07-14 02:21:52.136536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.499 [2024-07-14 02:21:52.136561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.499 [2024-07-14 02:21:52.136576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.499 [2024-07-14 02:21:52.136589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.499 [2024-07-14 02:21:52.136617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.499 qpair failed and we were unable to recover it. 00:34:46.499 [2024-07-14 02:21:52.146450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.499 [2024-07-14 02:21:52.146603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.499 [2024-07-14 02:21:52.146628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.499 [2024-07-14 02:21:52.146643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.499 [2024-07-14 02:21:52.146656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.499 [2024-07-14 02:21:52.146686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.499 qpair failed and we were unable to recover it. 00:34:46.499 [2024-07-14 02:21:52.156494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.499 [2024-07-14 02:21:52.156667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.499 [2024-07-14 02:21:52.156692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.499 [2024-07-14 02:21:52.156707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.499 [2024-07-14 02:21:52.156720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.499 [2024-07-14 02:21:52.156748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.499 qpair failed and we were unable to recover it. 
00:34:46.499 [2024-07-14 02:21:52.166518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.499 [2024-07-14 02:21:52.166667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.499 [2024-07-14 02:21:52.166692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.499 [2024-07-14 02:21:52.166707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.499 [2024-07-14 02:21:52.166720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.499 [2024-07-14 02:21:52.166749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.499 qpair failed and we were unable to recover it. 00:34:46.499 [2024-07-14 02:21:52.176631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.499 [2024-07-14 02:21:52.176818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.499 [2024-07-14 02:21:52.176844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.499 [2024-07-14 02:21:52.176858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.499 [2024-07-14 02:21:52.176883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.499 [2024-07-14 02:21:52.176914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.499 qpair failed and we were unable to recover it. 00:34:46.499 [2024-07-14 02:21:52.186567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.499 [2024-07-14 02:21:52.186715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.499 [2024-07-14 02:21:52.186740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.499 [2024-07-14 02:21:52.186755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.499 [2024-07-14 02:21:52.186773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.499 [2024-07-14 02:21:52.186803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.499 qpair failed and we were unable to recover it. 
00:34:46.758 [2024-07-14 02:21:52.196601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.758 [2024-07-14 02:21:52.196752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.758 [2024-07-14 02:21:52.196778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.758 [2024-07-14 02:21:52.196793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.758 [2024-07-14 02:21:52.196806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.758 [2024-07-14 02:21:52.196835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.758 qpair failed and we were unable to recover it. 00:34:46.758 [2024-07-14 02:21:52.206613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.758 [2024-07-14 02:21:52.206764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.758 [2024-07-14 02:21:52.206790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.758 [2024-07-14 02:21:52.206804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.758 [2024-07-14 02:21:52.206818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.758 [2024-07-14 02:21:52.206847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.758 qpair failed and we were unable to recover it. 00:34:46.758 [2024-07-14 02:21:52.216645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.758 [2024-07-14 02:21:52.216803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.758 [2024-07-14 02:21:52.216828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.758 [2024-07-14 02:21:52.216843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.758 [2024-07-14 02:21:52.216856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.758 [2024-07-14 02:21:52.216893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.758 qpair failed and we were unable to recover it. 
00:34:46.758 [2024-07-14 02:21:52.226654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.758 [2024-07-14 02:21:52.226802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.758 [2024-07-14 02:21:52.226827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.758 [2024-07-14 02:21:52.226842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.758 [2024-07-14 02:21:52.226855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.758 [2024-07-14 02:21:52.226893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.758 qpair failed and we were unable to recover it. 00:34:46.758 [2024-07-14 02:21:52.236711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.758 [2024-07-14 02:21:52.236863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.758 [2024-07-14 02:21:52.236895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.758 [2024-07-14 02:21:52.236911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.758 [2024-07-14 02:21:52.236924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.758 [2024-07-14 02:21:52.236953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.758 qpair failed and we were unable to recover it. 00:34:46.758 [2024-07-14 02:21:52.246742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.758 [2024-07-14 02:21:52.246902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.758 [2024-07-14 02:21:52.246929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.758 [2024-07-14 02:21:52.246944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.758 [2024-07-14 02:21:52.246957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.758 [2024-07-14 02:21:52.246988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.758 qpair failed and we were unable to recover it. 
00:34:46.758 [2024-07-14 02:21:52.256770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.758 [2024-07-14 02:21:52.256919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.758 [2024-07-14 02:21:52.256945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.758 [2024-07-14 02:21:52.256960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.758 [2024-07-14 02:21:52.256973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.758 [2024-07-14 02:21:52.257004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.758 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.266763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.266928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.266953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.266968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.266981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.267012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.276817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.277004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.277030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.277052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.277066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.277095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 
00:34:46.759 [2024-07-14 02:21:52.286928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.287083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.287109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.287124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.287137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.287167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.296879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.297064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.297090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.297105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.297118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.297147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.307004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.307173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.307199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.307214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.307227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.307256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 
00:34:46.759 [2024-07-14 02:21:52.316931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.317079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.317104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.317118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.317131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.317162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.326968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.327116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.327142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.327156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.327169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.327197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.337001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.337148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.337173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.337187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.337201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.337232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 
00:34:46.759 [2024-07-14 02:21:52.347003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.347187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.347212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.347226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.347239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.347269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.357150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.357368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.357393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.357408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.357421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.357450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.367080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.367224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.367254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.367269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.367281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.367310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 
00:34:46.759 [2024-07-14 02:21:52.377089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.377245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.377270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.377285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.377297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.377326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.387171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.387316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.387342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.387357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.387370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.387401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 00:34:46.759 [2024-07-14 02:21:52.397161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.759 [2024-07-14 02:21:52.397313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.759 [2024-07-14 02:21:52.397338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.759 [2024-07-14 02:21:52.397353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.759 [2024-07-14 02:21:52.397366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.759 [2024-07-14 02:21:52.397395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.759 qpair failed and we were unable to recover it. 
00:34:46.759 [2024-07-14 02:21:52.407237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.760 [2024-07-14 02:21:52.407388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.760 [2024-07-14 02:21:52.407412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.760 [2024-07-14 02:21:52.407426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.760 [2024-07-14 02:21:52.407439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.760 [2024-07-14 02:21:52.407473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.760 qpair failed and we were unable to recover it. 00:34:46.760 [2024-07-14 02:21:52.417320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.760 [2024-07-14 02:21:52.417480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.760 [2024-07-14 02:21:52.417505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.760 [2024-07-14 02:21:52.417519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.760 [2024-07-14 02:21:52.417533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.760 [2024-07-14 02:21:52.417561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.760 qpair failed and we were unable to recover it. 00:34:46.760 [2024-07-14 02:21:52.427235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.760 [2024-07-14 02:21:52.427384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.760 [2024-07-14 02:21:52.427410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.760 [2024-07-14 02:21:52.427425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.760 [2024-07-14 02:21:52.427438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.760 [2024-07-14 02:21:52.427468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.760 qpair failed and we were unable to recover it. 
00:34:46.760 [2024-07-14 02:21:52.437329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.760 [2024-07-14 02:21:52.437487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.760 [2024-07-14 02:21:52.437515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.760 [2024-07-14 02:21:52.437535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.760 [2024-07-14 02:21:52.437549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.760 [2024-07-14 02:21:52.437581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.760 qpair failed and we were unable to recover it. 00:34:46.760 [2024-07-14 02:21:52.447329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.760 [2024-07-14 02:21:52.447499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.760 [2024-07-14 02:21:52.447524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.760 [2024-07-14 02:21:52.447539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.760 [2024-07-14 02:21:52.447552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:46.760 [2024-07-14 02:21:52.447581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.760 qpair failed and we were unable to recover it. 00:34:47.019 [2024-07-14 02:21:52.457375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.019 [2024-07-14 02:21:52.457537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.019 [2024-07-14 02:21:52.457568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.019 [2024-07-14 02:21:52.457584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.019 [2024-07-14 02:21:52.457597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.019 [2024-07-14 02:21:52.457626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.019 qpair failed and we were unable to recover it. 
00:34:47.019 [2024-07-14 02:21:52.467391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.019 [2024-07-14 02:21:52.467552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.467578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.467593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.467605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.467634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 00:34:47.020 [2024-07-14 02:21:52.477423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.477599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.477625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.477639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.477652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.477683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 00:34:47.020 [2024-07-14 02:21:52.487406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.487548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.487574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.487588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.487602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.487633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 
00:34:47.020 [2024-07-14 02:21:52.497455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.497653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.497679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.497694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.497707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.497743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 00:34:47.020 [2024-07-14 02:21:52.507502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.507659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.507686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.507700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.507713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.507743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 00:34:47.020 [2024-07-14 02:21:52.517548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.517701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.517727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.517742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.517755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.517784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 
00:34:47.020 [2024-07-14 02:21:52.527576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.527776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.527802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.527817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.527830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.527859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 00:34:47.020 [2024-07-14 02:21:52.537665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.537814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.537839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.537853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.537873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.537904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 00:34:47.020 [2024-07-14 02:21:52.547625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.547818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.547843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.547858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.547876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.547907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 
00:34:47.020 [2024-07-14 02:21:52.557630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.557782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.557807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.557821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.557835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.557872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 00:34:47.020 [2024-07-14 02:21:52.567644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.567792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.567817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.567832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.567846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.567882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 00:34:47.020 [2024-07-14 02:21:52.577687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.577830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.577856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.577881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.577895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.577925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 
00:34:47.020 [2024-07-14 02:21:52.587697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.587845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.587877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.587894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.587912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.587942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.020 qpair failed and we were unable to recover it. 00:34:47.020 [2024-07-14 02:21:52.597732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.020 [2024-07-14 02:21:52.597892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.020 [2024-07-14 02:21:52.597918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.020 [2024-07-14 02:21:52.597933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.020 [2024-07-14 02:21:52.597946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e04000b90 00:34:47.020 [2024-07-14 02:21:52.597976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:47.021 qpair failed and we were unable to recover it. 00:34:47.021 [2024-07-14 02:21:52.598100] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:47.021 A controller has encountered a failure and is being reset. 00:34:47.021 [2024-07-14 02:21:52.598157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370f20 (9): Bad file descriptor 00:34:47.021 Controller properly reset. 00:34:47.021 Initializing NVMe Controllers 00:34:47.021 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:47.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:47.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:47.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:47.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:47.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:47.021 Initialization complete. Launching workers. 
00:34:47.021 Starting thread on core 1 00:34:47.021 Starting thread on core 2 00:34:47.021 Starting thread on core 3 00:34:47.021 Starting thread on core 0 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:47.021 00:34:47.021 real 0m10.753s 00:34:47.021 user 0m16.789s 00:34:47.021 sys 0m5.751s 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:47.021 ************************************ 00:34:47.021 END TEST nvmf_target_disconnect_tc2 00:34:47.021 ************************************ 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:47.021 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:47.021 rmmod nvme_tcp 00:34:47.021 rmmod nvme_fabrics 00:34:47.021 rmmod nvme_keyring 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1746535 ']' 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1746535 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1746535 ']' 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1746535 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1746535 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1746535' 00:34:47.281 killing process with pid 1746535 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1746535 00:34:47.281 02:21:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1746535 00:34:47.540 
02:21:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:47.540 02:21:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:47.540 02:21:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:47.540 02:21:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:47.540 02:21:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:47.540 02:21:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.540 02:21:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:47.540 02:21:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.438 02:21:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:49.438 00:34:49.438 real 0m15.445s 00:34:49.438 user 0m42.771s 00:34:49.438 sys 0m7.668s 00:34:49.438 02:21:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:49.438 02:21:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:49.438 ************************************ 00:34:49.438 END TEST nvmf_target_disconnect 00:34:49.438 ************************************ 00:34:49.438 02:21:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:49.438 02:21:55 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:49.438 02:21:55 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:49.438 02:21:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.438 02:21:55 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:49.438 00:34:49.438 real 27m12.599s 00:34:49.438 user 73m53.488s 00:34:49.438 sys 6m32.081s 00:34:49.438 02:21:55 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:49.438 02:21:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.438 ************************************ 00:34:49.438 END TEST nvmf_tcp 00:34:49.438 ************************************ 00:34:49.438 02:21:55 -- common/autotest_common.sh@1142 -- # return 0 00:34:49.438 02:21:55 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:49.438 02:21:55 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:49.438 02:21:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:49.438 02:21:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:49.438 02:21:55 -- common/autotest_common.sh@10 -- # set +x 00:34:49.697 ************************************ 00:34:49.697 START TEST spdkcli_nvmf_tcp 00:34:49.697 ************************************ 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:49.697 * Looking for test storage... 
00:34:49.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.697 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1747721 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1747721 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1747721 ']' 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:49.698 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.698 [2024-07-14 02:21:55.265795] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:49.698 [2024-07-14 02:21:55.265896] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747721 ] 00:34:49.698 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.698 [2024-07-14 02:21:55.323126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:49.966 [2024-07-14 02:21:55.409549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.966 [2024-07-14 02:21:55.409552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.966 02:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:49.966 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:49.966 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:49.966 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:49.966 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:49.966 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:49.966 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:49.966 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:49.966 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:49.966 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:49.966 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:49.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:49.966 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:49.966 ' 00:34:52.530 [2024-07-14 02:21:58.115999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.903 [2024-07-14 02:21:59.336269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:56.432 [2024-07-14 02:22:01.607539] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:58.329 [2024-07-14 02:22:03.573694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:59.696 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:59.696 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:59.696 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:59.696 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:59.696 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:59.696 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:59.696 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:59.696 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:59.696 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:59.696 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:59.696 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:59.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:59.696 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:59.696 02:22:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:59.696 02:22:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:59.696 02:22:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.696 02:22:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:59.696 02:22:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:59.696 02:22:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.696 02:22:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:59.696 02:22:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:59.953 02:22:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:00.209 02:22:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:00.209 02:22:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:00.209 02:22:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:00.209 02:22:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.209 02:22:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:00.209 02:22:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:00.209 02:22:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.209 02:22:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:00.209 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:00.209 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:00.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:00.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:00.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:00.210 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:00.210 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:00.210 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:00.210 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:00.210 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:00.210 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:00.210 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:00.210 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:00.210 ' 00:35:05.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:05.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:05.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:05.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:05.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:05.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:05.469 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:05.469 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:05.469 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:05.469 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:05.469 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:35:05.470 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:05.470 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:05.470 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:05.470 02:22:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:05.470 02:22:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:05.470 02:22:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:05.470 02:22:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1747721 00:35:05.470 02:22:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1747721 ']' 00:35:05.470 02:22:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1747721 00:35:05.470 02:22:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:35:05.470 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:05.470 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1747721 00:35:05.470 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:05.470 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:05.470 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1747721' 00:35:05.470 killing process with pid 1747721 00:35:05.470 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1747721 00:35:05.470 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1747721 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1747721 ']' 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1747721 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1747721 ']' 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1747721 00:35:05.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1747721) - No such process 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1747721 is not found' 00:35:05.729 Process with pid 1747721 is not found 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:05.729 00:35:05.729 real 0m16.116s 00:35:05.729 user 0m34.205s 00:35:05.729 sys 0m0.812s 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:05.729 02:22:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:05.729 ************************************ 00:35:05.729 END TEST spdkcli_nvmf_tcp 00:35:05.729 ************************************ 00:35:05.729 02:22:11 -- common/autotest_common.sh@1142 -- # return 0 00:35:05.729 02:22:11 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:05.729 02:22:11 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:05.729 02:22:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:05.729 02:22:11 -- common/autotest_common.sh@10 -- # set +x 00:35:05.729 ************************************ 00:35:05.729 START TEST nvmf_identify_passthru 00:35:05.729 ************************************ 00:35:05.729 02:22:11 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:05.729 * Looking for test storage... 00:35:05.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:05.729 02:22:11 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.729 02:22:11 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.729 02:22:11 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.729 02:22:11 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:05.729 02:22:11 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.729 02:22:11 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.729 02:22:11 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.729 02:22:11 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:05.729 02:22:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.729 02:22:11 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.729 02:22:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:05.729 02:22:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:05.729 02:22:11 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:05.729 02:22:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:08.260 02:22:13 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:08.260 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:08.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:08.261 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:08.261 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:08.261 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
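Up to this point nvmftestinit has only enumerated hardware: it walks the known Intel E810/X722 and Mellanox device IDs, keeps the two E810 ports found at 0000:0a:00.0 and 0000:0a:00.1, and resolves each PCI function to its renamed net device (cvl_0_0, cvl_0_1) through sysfs. A minimal standalone sketch of that PCI-to-netdev lookup, assuming only the standard /sys/bus/pci layout (the device addresses are the ones from this run):

    # Resolve each candidate NIC function to the kernel net device behind it.
    # /sys/bus/pci/devices/<bdf>/net/ holds one entry per netdev bound to that function.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue      # function present but no netdev bound
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done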
00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:08.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:08.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:35:08.261 00:35:08.261 --- 10.0.0.2 ping statistics --- 00:35:08.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.261 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:08.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:08.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:35:08.261 00:35:08.261 --- 10.0.0.1 ping statistics --- 00:35:08.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.261 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:08.261 02:22:13 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:08.261 02:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.261 02:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:35:08.261 02:22:13 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:35:08.261 02:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:08.261 02:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:08.261 02:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:08.261 02:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:08.261 02:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:08.261 EAL: No free 2048 kB hugepages reported on node 1 00:35:12.445 
02:22:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:12.445 02:22:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:12.445 02:22:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:12.445 02:22:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:12.445 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.630 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:16.630 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.630 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.630 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1752545 00:35:16.630 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:16.630 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:16.630 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1752545 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1752545 ']' 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:16.630 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.630 [2024-07-14 02:22:22.212014] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:16.630 [2024-07-14 02:22:22.212110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.630 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.630 [2024-07-14 02:22:22.290799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:16.889 [2024-07-14 02:22:22.388651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:16.889 [2024-07-14 02:22:22.388722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
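The two spdk_nvme_identify calls above read the serial and model of the local PCIe controller at 0000:88:00.0; they are the baseline that the test later compares against what the same controller reports once it is exposed over NVMe/TCP. A sketch of that baseline step, assuming an SPDK build tree in the current directory (the traddr and the -i 0 shared-memory id are taken from this run):

    # Record the identity of the physical controller before it is re-exported over the fabric.
    identify=./build/bin/spdk_nvme_identify
    bdf=0000:88:00.0
    nvme_serial_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Serial Number:/ {print $3}')
    nvme_model_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Model Number:/ {print $3}')
    echo "baseline serial=$nvme_serial_number model=$nvme_model_number"

Later in the trace the same two extractions are repeated against the fabric path (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') and the test fails if either value differs, which is exactly the identify-passthru property being verified.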
00:35:16.889 [2024-07-14 02:22:22.388739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.889 [2024-07-14 02:22:22.388752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.889 [2024-07-14 02:22:22.388763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.889 [2024-07-14 02:22:22.388853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.889 [2024-07-14 02:22:22.388945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:16.889 [2024-07-14 02:22:22.388907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:16.889 [2024-07-14 02:22:22.388948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.889 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:16.889 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:16.889 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:16.889 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.889 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.889 INFO: Log level set to 20 00:35:16.889 INFO: Requests: 00:35:16.889 { 00:35:16.889 "jsonrpc": "2.0", 00:35:16.889 "method": "nvmf_set_config", 00:35:16.889 "id": 1, 00:35:16.889 "params": { 00:35:16.889 "admin_cmd_passthru": { 00:35:16.889 "identify_ctrlr": true 00:35:16.889 } 00:35:16.889 } 00:35:16.889 } 00:35:16.889 00:35:16.889 INFO: response: 00:35:16.889 { 00:35:16.889 "jsonrpc": "2.0", 00:35:16.889 "id": 1, 00:35:16.889 "result": true 00:35:16.889 } 00:35:16.889 00:35:16.889 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.889 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:16.889 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.889 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.889 INFO: Setting log level to 20 00:35:16.889 INFO: Setting log level to 20 00:35:16.889 INFO: Log level set to 20 00:35:16.889 INFO: Log level set to 20 00:35:16.889 INFO: Requests: 00:35:16.889 { 00:35:16.889 "jsonrpc": "2.0", 00:35:16.889 "method": "framework_start_init", 00:35:16.889 "id": 1 00:35:16.889 } 00:35:16.889 00:35:16.889 INFO: Requests: 00:35:16.889 { 00:35:16.889 "jsonrpc": "2.0", 00:35:16.889 "method": "framework_start_init", 00:35:16.889 "id": 1 00:35:16.889 } 00:35:16.889 00:35:16.889 [2024-07-14 02:22:22.565197] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:16.889 INFO: response: 00:35:16.889 { 00:35:16.889 "jsonrpc": "2.0", 00:35:16.889 "id": 1, 00:35:16.889 "result": true 00:35:16.889 } 00:35:16.889 00:35:16.889 INFO: response: 00:35:16.889 { 00:35:16.889 "jsonrpc": "2.0", 00:35:16.889 "id": 1, 00:35:16.889 "result": true 00:35:16.889 } 00:35:16.889 00:35:16.889 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.889 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:16.889 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.889 02:22:22 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:16.889 INFO: Setting log level to 40 00:35:16.889 INFO: Setting log level to 40 00:35:16.889 INFO: Setting log level to 40 00:35:16.889 [2024-07-14 02:22:22.575338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.146 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.146 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:17.146 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:17.146 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.146 02:22:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:17.146 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.146 02:22:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:20.478 Nvme0n1 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:20.478 [2024-07-14 02:22:25.464031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:20.478 [ 00:35:20.478 { 00:35:20.478 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:20.478 "subtype": "Discovery", 00:35:20.478 "listen_addresses": [], 00:35:20.478 "allow_any_host": true, 00:35:20.478 "hosts": [] 00:35:20.478 }, 00:35:20.478 { 00:35:20.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:20.478 "subtype": "NVMe", 00:35:20.478 "listen_addresses": [ 00:35:20.478 { 00:35:20.478 "trtype": "TCP", 00:35:20.478 "adrfam": "IPv4", 00:35:20.478 "traddr": "10.0.0.2", 00:35:20.478 "trsvcid": "4420" 00:35:20.478 } 00:35:20.478 ], 00:35:20.478 "allow_any_host": true, 00:35:20.478 "hosts": [], 00:35:20.478 "serial_number": 
"SPDK00000000000001", 00:35:20.478 "model_number": "SPDK bdev Controller", 00:35:20.478 "max_namespaces": 1, 00:35:20.478 "min_cntlid": 1, 00:35:20.478 "max_cntlid": 65519, 00:35:20.478 "namespaces": [ 00:35:20.478 { 00:35:20.478 "nsid": 1, 00:35:20.478 "bdev_name": "Nvme0n1", 00:35:20.478 "name": "Nvme0n1", 00:35:20.478 "nguid": "39C2241C2AEB49228209532544CB12AB", 00:35:20.478 "uuid": "39c2241c-2aeb-4922-8209-532544cb12ab" 00:35:20.478 } 00:35:20.478 ] 00:35:20.478 } 00:35:20.478 ] 00:35:20.478 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:20.478 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:20.478 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:20.479 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:20.479 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.479 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:20.479 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:20.479 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:20.479 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.479 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:20.479 02:22:25 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:20.479 rmmod nvme_tcp 00:35:20.479 rmmod nvme_fabrics 00:35:20.479 rmmod nvme_keyring 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:20.479 02:22:25 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1752545 ']' 00:35:20.479 02:22:25 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1752545 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1752545 ']' 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1752545 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752545 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752545' 00:35:20.479 killing process with pid 1752545 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1752545 00:35:20.479 02:22:25 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1752545 00:35:21.853 02:22:27 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:21.853 02:22:27 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:21.853 02:22:27 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:21.853 02:22:27 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:21.853 02:22:27 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:21.853 02:22:27 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.853 02:22:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:21.853 02:22:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.756 02:22:29 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:24.014 00:35:24.014 real 0m18.138s 00:35:24.014 user 0m26.633s 00:35:24.014 sys 0m2.392s 00:35:24.014 02:22:29 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:24.014 02:22:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:24.014 ************************************ 00:35:24.014 END TEST nvmf_identify_passthru 00:35:24.015 ************************************ 00:35:24.015 02:22:29 -- common/autotest_common.sh@1142 -- # return 0 00:35:24.015 02:22:29 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:24.015 02:22:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:24.015 02:22:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:24.015 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:35:24.015 ************************************ 00:35:24.015 START TEST nvmf_dif 00:35:24.015 ************************************ 00:35:24.015 02:22:29 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:24.015 * Looking for test storage... 
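The nvmf_dif suite that starts here brings the same two-port TCP environment back up and then checks protection-information handling: the target is launched with the transport option --dif-insert-or-strip and backed by null bdevs carrying 16 bytes of metadata with DIF type 1, as the trace below shows. A condensed sketch of that target-side configuration, assuming rpc.py from the same SPDK tree talking to the default RPC socket (bdev, subsystem, and address values are the ones used in this run):

    # Target-side setup for the DIF tests: one null bdev with PI metadata,
    # exported through a TCP transport that inserts/strips DIF on behalf of the host.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420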
00:35:24.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:24.015 02:22:29 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.015 02:22:29 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.015 02:22:29 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.015 02:22:29 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.015 02:22:29 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.015 02:22:29 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.015 02:22:29 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.015 02:22:29 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:24.015 02:22:29 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:24.015 02:22:29 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:24.015 02:22:29 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:24.015 02:22:29 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:24.015 02:22:29 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:24.015 02:22:29 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.015 02:22:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:24.015 02:22:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:24.015 02:22:29 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:24.015 02:22:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:26.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:26.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:26.545 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:26.545 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.545 02:22:31 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:26.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:35:26.545 00:35:26.545 --- 10.0.0.2 ping statistics --- 00:35:26.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.545 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:26.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:35:26.545 00:35:26.545 --- 10.0.0.1 ping statistics --- 00:35:26.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.545 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:26.545 02:22:31 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:27.112 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:27.112 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:27.112 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:27.112 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:27.112 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:27.112 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:27.112 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:27.112 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:27.112 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:27.112 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:27.112 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:27.112 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:27.112 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:27.112 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:27.112 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:27.112 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:27.112 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:27.371 02:22:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:27.371 02:22:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:27.371 02:22:32 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:27.371 02:22:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1755686 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:27.371 02:22:32 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1755686 00:35:27.371 02:22:32 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1755686 ']' 00:35:27.371 02:22:32 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.371 02:22:32 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:27.371 02:22:32 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.371 02:22:32 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:27.371 02:22:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.371 [2024-07-14 02:22:33.018059] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:27.371 [2024-07-14 02:22:33.018138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.371 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.630 [2024-07-14 02:22:33.089890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.630 [2024-07-14 02:22:33.175965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.630 [2024-07-14 02:22:33.176024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.630 [2024-07-14 02:22:33.176049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.630 [2024-07-14 02:22:33.176060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.630 [2024-07-14 02:22:33.176071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
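Once the transport and the null bdev exist (next in the trace), fio_dif_1_default drives I/O through SPDK's fio bdev plugin rather than the kernel: gen_nvmf_target_json emits a small JSON config whose bdev entry is a bdev_nvme_attach_controller over TCP to 10.0.0.2:4420, and fio is started with --ioengine=spdk_bdev pointing at that config. A sketch of the shape of that invocation, using hypothetical file names (config.json, job parameters) in place of the /dev/fd descriptors the script passes, and assuming the plugin at ./build/fio/spdk_bdev; the real config also carries hostnqn/hdgst/ddgst fields that are trimmed here:

    # config.json: attach the NVMe-oF target as bdev "Nvme0"; its namespace appears as Nvme0n1.
    cat > config.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "adrfam": "ipv4",
                    "traddr": "10.0.0.2", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0" } } ] } ] }
    EOF
    # Run fio through the SPDK bdev ioengine against that config.
    LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev \
        --spdk_json_conf=config.json --filename=Nvme0n1 \
        --rw=randread --bs=4k --runtime=10 --name=dif_default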
00:35:27.630 [2024-07-14 02:22:33.176099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:27.630 02:22:33 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.630 02:22:33 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.630 02:22:33 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:27.630 02:22:33 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.630 [2024-07-14 02:22:33.305694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.630 02:22:33 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:27.630 02:22:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.888 ************************************ 00:35:27.888 START TEST fio_dif_1_default 00:35:27.888 ************************************ 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:27.888 bdev_null0 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:27.888 [2024-07-14 02:22:33.361986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:27.888 { 00:35:27.888 "params": { 00:35:27.888 "name": "Nvme$subsystem", 00:35:27.888 "trtype": "$TEST_TRANSPORT", 00:35:27.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.888 "adrfam": "ipv4", 00:35:27.888 "trsvcid": "$NVMF_PORT", 00:35:27.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.888 "hdgst": ${hdgst:-false}, 00:35:27.888 "ddgst": ${ddgst:-false} 00:35:27.888 }, 00:35:27.888 "method": "bdev_nvme_attach_controller" 00:35:27.888 } 00:35:27.888 EOF 00:35:27.888 )") 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:27.888 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:27.889 "params": { 00:35:27.889 "name": "Nvme0", 00:35:27.889 "trtype": "tcp", 00:35:27.889 "traddr": "10.0.0.2", 00:35:27.889 "adrfam": "ipv4", 00:35:27.889 "trsvcid": "4420", 00:35:27.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:27.889 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:27.889 "hdgst": false, 00:35:27.889 "ddgst": false 00:35:27.889 }, 00:35:27.889 "method": "bdev_nvme_attach_controller" 00:35:27.889 }' 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:27.889 02:22:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.146 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:28.146 fio-3.35 00:35:28.146 Starting 1 thread 00:35:28.146 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.344 00:35:40.344 filename0: (groupid=0, jobs=1): err= 0: pid=1755911: Sun Jul 14 02:22:44 2024 00:35:40.344 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10021msec) 00:35:40.344 slat (nsec): min=5286, max=72918, avg=9268.42, stdev=4498.73 00:35:40.344 clat (usec): min=1036, max=45048, avg=21567.52, stdev=20358.02 00:35:40.344 lat (usec): min=1043, max=45075, avg=21576.79, stdev=20356.75 00:35:40.344 clat percentiles (usec): 00:35:40.344 | 1.00th=[ 1057], 5.00th=[ 1074], 10.00th=[ 1090], 20.00th=[ 1123], 00:35:40.344 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[41681], 60.00th=[41681], 00:35:40.344 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:35:40.344 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:40.344 | 99.99th=[44827] 00:35:40.344 bw ( KiB/s): min= 672, max= 768, per=99.89%, avg=740.80, stdev=33.28, samples=20 00:35:40.344 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 
00:35:40.344 lat (msec) : 2=49.78%, 50=50.22% 00:35:40.344 cpu : usr=90.00%, sys=9.71%, ctx=20, majf=0, minf=275 00:35:40.344 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:40.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.344 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.344 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:40.344 00:35:40.344 Run status group 0 (all jobs): 00:35:40.344 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10021-10021msec 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 00:35:40.344 real 0m11.139s 00:35:40.344 user 0m10.042s 00:35:40.344 sys 0m1.250s 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 ************************************ 00:35:40.344 END TEST fio_dif_1_default 00:35:40.344 ************************************ 00:35:40.344 02:22:44 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:40.344 02:22:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:40.344 02:22:44 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:40.344 02:22:44 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 ************************************ 00:35:40.344 START TEST fio_dif_1_multi_subsystems 00:35:40.344 ************************************ 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.344 02:22:44 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 bdev_null0 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 [2024-07-14 02:22:44.549602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 bdev_null1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.344 { 00:35:40.344 "params": { 00:35:40.344 "name": "Nvme$subsystem", 00:35:40.344 "trtype": "$TEST_TRANSPORT", 00:35:40.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.344 "adrfam": "ipv4", 00:35:40.344 "trsvcid": "$NVMF_PORT", 00:35:40.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.344 "hdgst": ${hdgst:-false}, 00:35:40.344 "ddgst": ${ddgst:-false} 00:35:40.344 }, 00:35:40.344 "method": "bdev_nvme_attach_controller" 00:35:40.344 } 00:35:40.344 EOF 00:35:40.344 )") 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.344 
02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:40.344 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.345 { 00:35:40.345 "params": { 00:35:40.345 "name": "Nvme$subsystem", 00:35:40.345 "trtype": "$TEST_TRANSPORT", 00:35:40.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.345 "adrfam": "ipv4", 00:35:40.345 "trsvcid": "$NVMF_PORT", 00:35:40.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.345 "hdgst": ${hdgst:-false}, 00:35:40.345 "ddgst": ${ddgst:-false} 00:35:40.345 }, 00:35:40.345 "method": "bdev_nvme_attach_controller" 00:35:40.345 } 00:35:40.345 EOF 00:35:40.345 )") 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:40.345 "params": { 00:35:40.345 "name": "Nvme0", 00:35:40.345 "trtype": "tcp", 00:35:40.345 "traddr": "10.0.0.2", 00:35:40.345 "adrfam": "ipv4", 00:35:40.345 "trsvcid": "4420", 00:35:40.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.345 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.345 "hdgst": false, 00:35:40.345 "ddgst": false 00:35:40.345 }, 00:35:40.345 "method": "bdev_nvme_attach_controller" 00:35:40.345 },{ 00:35:40.345 "params": { 00:35:40.345 "name": "Nvme1", 00:35:40.345 "trtype": "tcp", 00:35:40.345 "traddr": "10.0.0.2", 00:35:40.345 "adrfam": "ipv4", 00:35:40.345 "trsvcid": "4420", 00:35:40.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.345 "hdgst": false, 00:35:40.345 "ddgst": false 00:35:40.345 }, 00:35:40.345 "method": "bdev_nvme_attach_controller" 00:35:40.345 }' 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:40.345 02:22:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.345 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:40.345 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:40.345 fio-3.35 00:35:40.345 Starting 2 threads 00:35:40.345 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.323 00:35:50.323 filename0: (groupid=0, jobs=1): err= 0: pid=1757317: Sun Jul 14 02:22:55 2024 00:35:50.323 read: IOPS=181, BW=726KiB/s (743kB/s)(7280KiB/10027msec) 00:35:50.323 slat (nsec): min=4876, max=38242, avg=9457.82, stdev=2496.44 00:35:50.323 clat (usec): min=820, max=44672, avg=22007.59, stdev=20179.59 00:35:50.323 lat (usec): min=831, max=44687, avg=22017.05, stdev=20179.42 00:35:50.323 clat percentiles (usec): 00:35:50.323 | 1.00th=[ 840], 5.00th=[ 857], 10.00th=[ 865], 20.00th=[ 881], 00:35:50.323 | 30.00th=[ 889], 40.00th=[ 906], 50.00th=[41157], 60.00th=[41157], 00:35:50.323 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:35:50.323 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:50.323 | 99.99th=[44827] 
00:35:50.323 bw ( KiB/s): min= 608, max= 768, per=65.37%, avg=726.40, stdev=45.37, samples=20 00:35:50.323 iops : min= 152, max= 192, avg=181.60, stdev=11.34, samples=20 00:35:50.323 lat (usec) : 1000=47.69% 00:35:50.323 lat (msec) : 50=52.31% 00:35:50.323 cpu : usr=94.34%, sys=5.39%, ctx=11, majf=0, minf=157 00:35:50.323 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.323 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.323 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:50.323 filename1: (groupid=0, jobs=1): err= 0: pid=1757318: Sun Jul 14 02:22:55 2024 00:35:50.323 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10013msec) 00:35:50.323 slat (nsec): min=5046, max=27829, avg=9788.80, stdev=2796.13 00:35:50.323 clat (usec): min=40905, max=44555, avg=41515.19, stdev=555.23 00:35:50.323 lat (usec): min=40913, max=44568, avg=41524.98, stdev=555.30 00:35:50.323 clat percentiles (usec): 00:35:50.323 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:50.323 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:35:50.323 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:50.323 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:35:50.323 | 99.99th=[44303] 00:35:50.323 bw ( KiB/s): min= 352, max= 416, per=34.58%, avg=384.00, stdev=10.38, samples=20 00:35:50.323 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:35:50.323 lat (msec) : 50=100.00% 00:35:50.323 cpu : usr=91.96%, sys=5.93%, ctx=39, majf=0, minf=94 00:35:50.323 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.323 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.323 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:50.323 00:35:50.323 Run status group 0 (all jobs): 00:35:50.323 READ: bw=1111KiB/s (1137kB/s), 385KiB/s-726KiB/s (394kB/s-743kB/s), io=10.9MiB (11.4MB), run=10013-10027msec 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.323 
02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.323 00:35:50.323 real 0m11.217s 00:35:50.323 user 0m19.886s 00:35:50.323 sys 0m1.425s 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:50.323 02:22:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 ************************************ 00:35:50.323 END TEST fio_dif_1_multi_subsystems 00:35:50.323 ************************************ 00:35:50.323 02:22:55 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:50.323 02:22:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:50.323 02:22:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:50.323 02:22:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:50.323 02:22:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 ************************************ 00:35:50.323 START TEST fio_dif_rand_params 00:35:50.323 ************************************ 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 
0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 bdev_null0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.323 [2024-07-14 02:22:55.816265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:50.323 { 00:35:50.323 "params": { 00:35:50.323 "name": "Nvme$subsystem", 00:35:50.323 "trtype": "$TEST_TRANSPORT", 00:35:50.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:50.323 "adrfam": "ipv4", 00:35:50.323 "trsvcid": "$NVMF_PORT", 00:35:50.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:50.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:50.323 "hdgst": ${hdgst:-false}, 00:35:50.323 "ddgst": ${ddgst:-false} 00:35:50.323 }, 00:35:50.323 "method": "bdev_nvme_attach_controller" 00:35:50.323 } 00:35:50.323 EOF 00:35:50.323 )") 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:50.323 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:50.324 "params": { 00:35:50.324 "name": "Nvme0", 00:35:50.324 "trtype": "tcp", 00:35:50.324 "traddr": "10.0.0.2", 00:35:50.324 "adrfam": "ipv4", 00:35:50.324 "trsvcid": "4420", 00:35:50.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.324 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.324 "hdgst": false, 00:35:50.324 "ddgst": false 00:35:50.324 }, 00:35:50.324 "method": "bdev_nvme_attach_controller" 00:35:50.324 }' 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:50.324 02:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.583 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:50.583 ... 
00:35:50.583 fio-3.35 00:35:50.583 Starting 3 threads 00:35:50.583 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.149 00:35:57.149 filename0: (groupid=0, jobs=1): err= 0: pid=1758709: Sun Jul 14 02:23:01 2024 00:35:57.150 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(142MiB/5045msec) 00:35:57.150 slat (nsec): min=4772, max=30348, avg=13036.97, stdev=2314.73 00:35:57.150 clat (usec): min=5173, max=55152, avg=13232.05, stdev=12571.44 00:35:57.150 lat (usec): min=5187, max=55167, avg=13245.09, stdev=12571.41 00:35:57.150 clat percentiles (usec): 00:35:57.150 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6915], 00:35:57.150 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9896], 00:35:57.150 | 70.00th=[11338], 80.00th=[12387], 90.00th=[16909], 95.00th=[50594], 00:35:57.150 | 99.00th=[53216], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:35:57.150 | 99.99th=[55313] 00:35:57.150 bw ( KiB/s): min=15360, max=37120, per=36.61%, avg=29102.30, stdev=7032.46, samples=10 00:35:57.150 iops : min= 120, max= 290, avg=227.30, stdev=54.99, samples=10 00:35:57.150 lat (msec) : 10=61.28%, 20=28.80%, 50=3.25%, 100=6.67% 00:35:57.150 cpu : usr=92.39%, sys=7.08%, ctx=11, majf=0, minf=103 00:35:57.150 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.150 issued rwts: total=1139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.150 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:57.150 filename0: (groupid=0, jobs=1): err= 0: pid=1758710: Sun Jul 14 02:23:01 2024 00:35:57.150 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(134MiB/5048msec) 00:35:57.150 slat (nsec): min=5431, max=29601, avg=12468.88, stdev=2034.67 00:35:57.150 clat (usec): min=5346, max=89850, avg=14079.30, stdev=12928.97 00:35:57.150 lat (usec): min=5359, max=89870, avg=14091.77, stdev=12929.05 00:35:57.150 clat percentiles (usec): 00:35:57.150 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6783], 20.00th=[ 7898], 00:35:57.150 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10552], 00:35:57.150 | 70.00th=[11994], 80.00th=[13042], 90.00th=[47973], 95.00th=[51119], 00:35:57.150 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[89654], 00:35:57.150 | 99.99th=[89654] 00:35:57.150 bw ( KiB/s): min=19200, max=39936, per=34.42%, avg=27360.10, stdev=6751.33, samples=10 00:35:57.150 iops : min= 150, max= 312, avg=213.70, stdev=52.71, samples=10 00:35:57.150 lat (msec) : 10=54.25%, 20=35.57%, 50=2.99%, 100=7.19% 00:35:57.150 cpu : usr=92.83%, sys=6.74%, ctx=9, majf=0, minf=108 00:35:57.150 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.150 issued rwts: total=1071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.150 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:57.150 filename0: (groupid=0, jobs=1): err= 0: pid=1758711: Sun Jul 14 02:23:01 2024 00:35:57.150 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(116MiB/5006msec) 00:35:57.150 slat (nsec): min=4940, max=33666, avg=12796.32, stdev=2920.03 00:35:57.150 clat (usec): min=5187, max=93162, avg=16214.39, stdev=15375.28 00:35:57.150 lat (usec): min=5201, max=93170, avg=16227.18, stdev=15375.30 00:35:57.150 clat percentiles (usec): 
00:35:57.150 | 1.00th=[ 5932], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 8291], 00:35:57.150 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[11469], 00:35:57.150 | 70.00th=[12649], 80.00th=[13960], 90.00th=[50594], 95.00th=[52691], 00:35:57.150 | 99.00th=[54789], 99.50th=[55313], 99.90th=[92799], 99.95th=[92799], 00:35:57.150 | 99.99th=[92799] 00:35:57.150 bw ( KiB/s): min=18944, max=30208, per=29.69%, avg=23602.70, stdev=3197.89, samples=10 00:35:57.150 iops : min= 148, max= 236, avg=184.30, stdev=24.94, samples=10 00:35:57.150 lat (msec) : 10=50.16%, 20=35.24%, 50=3.57%, 100=11.03% 00:35:57.150 cpu : usr=92.95%, sys=6.51%, ctx=25, majf=0, minf=79 00:35:57.150 IO depths : 1=4.6%, 2=95.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.150 issued rwts: total=925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.150 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:57.150 00:35:57.150 Run status group 0 (all jobs): 00:35:57.150 READ: bw=77.6MiB/s (81.4MB/s), 23.1MiB/s-28.2MiB/s (24.2MB/s-29.6MB/s), io=392MiB (411MB), run=5006-5048msec 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 bdev_null0 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 [2024-07-14 02:23:01.916515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 bdev_null1 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 bdev_null2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:35:57.151 { 00:35:57.151 "params": { 00:35:57.151 "name": "Nvme$subsystem", 00:35:57.151 "trtype": "$TEST_TRANSPORT", 00:35:57.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.151 "adrfam": "ipv4", 00:35:57.151 "trsvcid": "$NVMF_PORT", 00:35:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.151 "hdgst": ${hdgst:-false}, 00:35:57.151 "ddgst": ${ddgst:-false} 00:35:57.151 }, 00:35:57.151 "method": "bdev_nvme_attach_controller" 00:35:57.151 } 00:35:57.151 EOF 00:35:57.151 )") 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:57.151 { 00:35:57.151 "params": { 00:35:57.151 "name": "Nvme$subsystem", 00:35:57.151 "trtype": "$TEST_TRANSPORT", 00:35:57.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.151 "adrfam": "ipv4", 00:35:57.151 "trsvcid": "$NVMF_PORT", 00:35:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.151 "hdgst": ${hdgst:-false}, 00:35:57.151 "ddgst": ${ddgst:-false} 00:35:57.151 }, 00:35:57.151 "method": "bdev_nvme_attach_controller" 00:35:57.151 } 00:35:57.151 EOF 00:35:57.151 )") 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:57.151 { 00:35:57.151 "params": { 00:35:57.151 "name": "Nvme$subsystem", 00:35:57.151 "trtype": "$TEST_TRANSPORT", 00:35:57.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.151 "adrfam": "ipv4", 00:35:57.151 "trsvcid": "$NVMF_PORT", 00:35:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.151 "hdgst": ${hdgst:-false}, 00:35:57.151 "ddgst": ${ddgst:-false} 00:35:57.151 }, 00:35:57.151 "method": "bdev_nvme_attach_controller" 00:35:57.151 } 00:35:57.151 EOF 00:35:57.151 )") 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:57.151 02:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:57.151 "params": { 00:35:57.151 "name": "Nvme0", 00:35:57.151 "trtype": "tcp", 00:35:57.151 "traddr": "10.0.0.2", 00:35:57.151 "adrfam": "ipv4", 00:35:57.151 "trsvcid": "4420", 00:35:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.151 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:57.151 "hdgst": false, 00:35:57.151 "ddgst": false 00:35:57.151 }, 00:35:57.151 "method": "bdev_nvme_attach_controller" 00:35:57.151 },{ 00:35:57.151 "params": { 00:35:57.151 "name": "Nvme1", 00:35:57.151 "trtype": "tcp", 00:35:57.151 "traddr": "10.0.0.2", 00:35:57.151 "adrfam": "ipv4", 00:35:57.151 "trsvcid": "4420", 00:35:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:57.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:57.151 "hdgst": false, 00:35:57.151 "ddgst": false 00:35:57.151 }, 00:35:57.151 "method": "bdev_nvme_attach_controller" 00:35:57.151 },{ 00:35:57.151 "params": { 00:35:57.151 "name": "Nvme2", 00:35:57.151 "trtype": "tcp", 00:35:57.151 "traddr": "10.0.0.2", 00:35:57.151 "adrfam": "ipv4", 00:35:57.151 "trsvcid": "4420", 00:35:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:57.151 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:57.151 "hdgst": false, 00:35:57.151 "ddgst": false 00:35:57.151 }, 00:35:57.151 "method": "bdev_nvme_attach_controller" 00:35:57.151 }' 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:57.151 02:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.151 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:57.151 ... 00:35:57.151 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:57.151 ... 00:35:57.151 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:57.151 ... 00:35:57.151 fio-3.35 00:35:57.151 Starting 24 threads 00:35:57.151 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.356 00:36:09.356 filename0: (groupid=0, jobs=1): err= 0: pid=1759564: Sun Jul 14 02:23:13 2024 00:36:09.356 read: IOPS=71, BW=287KiB/s (294kB/s)(2880KiB/10040msec) 00:36:09.356 slat (nsec): min=8898, max=70256, avg=31984.46, stdev=9473.90 00:36:09.356 clat (msec): min=130, max=341, avg=222.83, stdev=31.52 00:36:09.356 lat (msec): min=130, max=341, avg=222.86, stdev=31.53 00:36:09.356 clat percentiles (msec): 00:36:09.356 | 1.00th=[ 131], 5.00th=[ 157], 10.00th=[ 180], 20.00th=[ 205], 00:36:09.356 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 232], 00:36:09.356 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:36:09.356 | 99.00th=[ 300], 99.50th=[ 317], 99.90th=[ 342], 99.95th=[ 342], 00:36:09.356 | 99.99th=[ 342] 00:36:09.356 bw ( KiB/s): min= 256, max= 384, per=4.03%, avg=281.60, stdev=50.70, samples=20 00:36:09.356 iops : min= 64, max= 96, avg=70.40, stdev=12.68, samples=20 00:36:09.356 lat (msec) : 250=93.33%, 500=6.67% 00:36:09.356 cpu : usr=97.93%, sys=1.71%, ctx=15, majf=0, minf=9 00:36:09.356 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:09.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.356 filename0: (groupid=0, jobs=1): err= 0: pid=1759565: Sun Jul 14 02:23:13 2024 00:36:09.356 read: IOPS=69, BW=280KiB/s (286kB/s)(2808KiB/10037msec) 00:36:09.356 slat (usec): min=5, max=102, avg=67.68, stdev=14.02 00:36:09.356 clat (msec): min=66, max=369, avg=228.17, stdev=36.62 00:36:09.356 lat (msec): min=66, max=370, avg=228.24, stdev=36.62 00:36:09.356 clat percentiles (msec): 00:36:09.356 | 1.00th=[ 67], 5.00th=[ 180], 10.00th=[ 201], 20.00th=[ 215], 00:36:09.356 | 30.00th=[ 224], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 236], 00:36:09.356 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 257], 95.00th=[ 288], 00:36:09.356 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 372], 99.95th=[ 372], 00:36:09.356 | 99.99th=[ 372] 00:36:09.356 bw ( KiB/s): min= 128, max= 384, per=3.93%, avg=274.40, stdev=69.89, samples=20 00:36:09.356 iops : min= 32, max= 96, avg=68.60, stdev=17.47, samples=20 00:36:09.356 lat (msec) : 100=1.99%, 250=85.75%, 500=12.25% 00:36:09.356 cpu : usr=98.07%, sys=1.41%, ctx=21, majf=0, minf=9 00:36:09.356 IO depths : 1=3.1%, 
2=9.4%, 4=25.1%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:09.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.356 filename0: (groupid=0, jobs=1): err= 0: pid=1759566: Sun Jul 14 02:23:13 2024 00:36:09.356 read: IOPS=70, BW=281KiB/s (287kB/s)(2816KiB/10037msec) 00:36:09.356 slat (usec): min=5, max=106, avg=43.41, stdev=29.15 00:36:09.356 clat (msec): min=67, max=325, avg=227.72, stdev=33.06 00:36:09.356 lat (msec): min=67, max=325, avg=227.76, stdev=33.05 00:36:09.356 clat percentiles (msec): 00:36:09.356 | 1.00th=[ 68], 5.00th=[ 186], 10.00th=[ 203], 20.00th=[ 218], 00:36:09.356 | 30.00th=[ 224], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 236], 00:36:09.356 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 271], 00:36:09.356 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:36:09.356 | 99.99th=[ 326] 00:36:09.356 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=275.20, stdev=75.15, samples=20 00:36:09.356 iops : min= 32, max= 96, avg=68.80, stdev=18.79, samples=20 00:36:09.356 lat (msec) : 100=2.27%, 250=90.34%, 500=7.39% 00:36:09.356 cpu : usr=97.32%, sys=1.87%, ctx=26, majf=0, minf=9 00:36:09.356 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:09.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.356 filename0: (groupid=0, jobs=1): err= 0: pid=1759567: Sun Jul 14 02:23:13 2024 00:36:09.356 read: IOPS=73, BW=293KiB/s (300kB/s)(2944KiB/10039msec) 00:36:09.356 slat (usec): min=8, max=178, avg=24.97, stdev=16.74 00:36:09.356 clat (msec): min=120, max=317, avg=218.02, stdev=34.38 00:36:09.356 lat (msec): min=120, max=317, avg=218.05, stdev=34.38 00:36:09.356 clat percentiles (msec): 00:36:09.356 | 1.00th=[ 121], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 188], 00:36:09.356 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 226], 60.00th=[ 230], 00:36:09.356 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:36:09.356 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 317], 00:36:09.356 | 99.99th=[ 317] 00:36:09.356 bw ( KiB/s): min= 256, max= 384, per=4.13%, avg=288.00, stdev=55.18, samples=20 00:36:09.356 iops : min= 64, max= 96, avg=72.00, stdev=13.80, samples=20 00:36:09.356 lat (msec) : 250=92.93%, 500=7.07% 00:36:09.356 cpu : usr=96.75%, sys=2.22%, ctx=51, majf=0, minf=9 00:36:09.356 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:09.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.356 filename0: (groupid=0, jobs=1): err= 0: pid=1759568: Sun Jul 14 02:23:13 2024 00:36:09.356 read: IOPS=74, BW=299KiB/s (306kB/s)(3008KiB/10061msec) 00:36:09.356 slat (usec): min=6, max=282, avg=51.86, stdev=20.42 00:36:09.356 clat (msec): min=18, max=292, avg=213.66, stdev=52.45 00:36:09.356 lat (msec): min=18, 
max=292, avg=213.71, stdev=52.45 00:36:09.356 clat percentiles (msec): 00:36:09.356 | 1.00th=[ 19], 5.00th=[ 51], 10.00th=[ 180], 20.00th=[ 209], 00:36:09.356 | 30.00th=[ 222], 40.00th=[ 224], 50.00th=[ 226], 60.00th=[ 232], 00:36:09.356 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 249], 00:36:09.356 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 292], 99.95th=[ 292], 00:36:09.356 | 99.99th=[ 292] 00:36:09.356 bw ( KiB/s): min= 256, max= 640, per=4.21%, avg=294.40, stdev=92.77, samples=20 00:36:09.356 iops : min= 64, max= 160, avg=73.60, stdev=23.19, samples=20 00:36:09.356 lat (msec) : 20=2.13%, 50=2.13%, 100=2.13%, 250=88.83%, 500=4.79% 00:36:09.356 cpu : usr=95.76%, sys=2.50%, ctx=193, majf=0, minf=9 00:36:09.356 IO depths : 1=2.4%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:09.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.356 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.356 filename0: (groupid=0, jobs=1): err= 0: pid=1759569: Sun Jul 14 02:23:13 2024 00:36:09.356 read: IOPS=76, BW=305KiB/s (313kB/s)(3072KiB/10064msec) 00:36:09.356 slat (usec): min=4, max=183, avg=62.05, stdev=24.44 00:36:09.356 clat (msec): min=8, max=340, avg=209.21, stdev=63.49 00:36:09.356 lat (msec): min=8, max=340, avg=209.27, stdev=63.50 00:36:09.356 clat percentiles (msec): 00:36:09.356 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 124], 20.00th=[ 203], 00:36:09.356 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 226], 60.00th=[ 232], 00:36:09.356 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 284], 00:36:09.356 | 99.00th=[ 300], 99.50th=[ 317], 99.90th=[ 342], 99.95th=[ 342], 00:36:09.356 | 99.99th=[ 342] 00:36:09.356 bw ( KiB/s): min= 256, max= 768, per=4.30%, avg=300.80, stdev=117.87, samples=20 00:36:09.356 iops : min= 64, max= 192, avg=75.20, stdev=29.47, samples=20 00:36:09.356 lat (msec) : 10=2.08%, 20=2.08%, 50=2.73%, 100=1.69%, 250=82.29% 00:36:09.356 lat (msec) : 500=9.11% 00:36:09.356 cpu : usr=94.38%, sys=2.95%, ctx=104, majf=0, minf=9 00:36:09.357 IO depths : 1=3.0%, 2=8.9%, 4=23.4%, 8=54.9%, 16=9.8%, 32=0.0%, >=64=0.0% 00:36:09.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 complete : 0=0.0%, 4=94.0%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.357 filename0: (groupid=0, jobs=1): err= 0: pid=1759570: Sun Jul 14 02:23:13 2024 00:36:09.357 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10040msec) 00:36:09.357 slat (usec): min=15, max=209, avg=66.96, stdev=32.88 00:36:09.357 clat (msec): min=113, max=340, avg=227.66, stdev=28.30 00:36:09.357 lat (msec): min=113, max=340, avg=227.73, stdev=28.30 00:36:09.357 clat percentiles (msec): 00:36:09.357 | 1.00th=[ 159], 5.00th=[ 178], 10.00th=[ 199], 20.00th=[ 209], 00:36:09.357 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 232], 00:36:09.357 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 284], 00:36:09.357 | 99.00th=[ 300], 99.50th=[ 317], 99.90th=[ 342], 99.95th=[ 342], 00:36:09.357 | 99.99th=[ 342] 00:36:09.357 bw ( KiB/s): min= 256, max= 384, per=3.94%, avg=275.20, stdev=44.84, samples=20 00:36:09.357 iops : min= 64, max= 96, avg=68.80, stdev=11.21, samples=20 00:36:09.357 lat (msec) : 
250=90.06%, 500=9.94% 00:36:09.357 cpu : usr=94.20%, sys=3.22%, ctx=190, majf=0, minf=9 00:36:09.357 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:09.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.357 filename0: (groupid=0, jobs=1): err= 0: pid=1759571: Sun Jul 14 02:23:13 2024 00:36:09.357 read: IOPS=70, BW=281KiB/s (287kB/s)(2816KiB/10037msec) 00:36:09.357 slat (nsec): min=8780, max=59288, avg=21435.79, stdev=7936.18 00:36:09.357 clat (msec): min=118, max=358, avg=227.91, stdev=21.90 00:36:09.357 lat (msec): min=118, max=358, avg=227.93, stdev=21.90 00:36:09.357 clat percentiles (msec): 00:36:09.357 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 222], 00:36:09.357 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 234], 00:36:09.357 | 70.00th=[ 236], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 271], 00:36:09.357 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 359], 99.95th=[ 359], 00:36:09.357 | 99.99th=[ 359] 00:36:09.357 bw ( KiB/s): min= 240, max= 384, per=3.94%, avg=275.20, stdev=45.14, samples=20 00:36:09.357 iops : min= 60, max= 96, avg=68.80, stdev=11.28, samples=20 00:36:09.357 lat (msec) : 250=92.61%, 500=7.39% 00:36:09.357 cpu : usr=97.88%, sys=1.75%, ctx=17, majf=0, minf=9 00:36:09.357 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:09.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.357 filename1: (groupid=0, jobs=1): err= 0: pid=1759572: Sun Jul 14 02:23:13 2024 00:36:09.357 read: IOPS=71, BW=287KiB/s (294kB/s)(2880KiB/10048msec) 00:36:09.357 slat (usec): min=12, max=369, avg=69.08, stdev=34.69 00:36:09.357 clat (msec): min=66, max=297, avg=222.72, stdev=33.98 00:36:09.357 lat (msec): min=66, max=297, avg=222.79, stdev=33.98 00:36:09.357 clat percentiles (msec): 00:36:09.357 | 1.00th=[ 67], 5.00th=[ 157], 10.00th=[ 180], 20.00th=[ 209], 00:36:09.357 | 30.00th=[ 224], 40.00th=[ 224], 50.00th=[ 228], 60.00th=[ 234], 00:36:09.357 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 271], 00:36:09.357 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:36:09.357 | 99.99th=[ 296] 00:36:09.357 bw ( KiB/s): min= 128, max= 384, per=4.03%, avg=281.60, stdev=65.54, samples=20 00:36:09.357 iops : min= 32, max= 96, avg=70.40, stdev=16.38, samples=20 00:36:09.357 lat (msec) : 100=2.22%, 250=89.72%, 500=8.06% 00:36:09.357 cpu : usr=95.25%, sys=2.43%, ctx=210, majf=0, minf=9 00:36:09.357 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:36:09.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.357 filename1: (groupid=0, jobs=1): err= 0: pid=1759573: Sun Jul 14 02:23:13 2024 00:36:09.357 read: IOPS=71, BW=287KiB/s (294kB/s)(2880KiB/10040msec) 00:36:09.357 slat (usec): min=8, 
max=172, avg=54.15, stdev=24.75 00:36:09.357 clat (msec): min=127, max=299, avg=222.63, stdev=25.43 00:36:09.357 lat (msec): min=127, max=299, avg=222.68, stdev=25.44 00:36:09.357 clat percentiles (msec): 00:36:09.357 | 1.00th=[ 129], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 209], 00:36:09.357 | 30.00th=[ 222], 40.00th=[ 224], 50.00th=[ 226], 60.00th=[ 232], 00:36:09.357 | 70.00th=[ 236], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 247], 00:36:09.357 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 300], 99.95th=[ 300], 00:36:09.357 | 99.99th=[ 300] 00:36:09.357 bw ( KiB/s): min= 256, max= 384, per=4.03%, avg=281.60, stdev=50.70, samples=20 00:36:09.357 iops : min= 64, max= 96, avg=70.40, stdev=12.68, samples=20 00:36:09.357 lat (msec) : 250=95.00%, 500=5.00% 00:36:09.357 cpu : usr=96.31%, sys=2.33%, ctx=155, majf=0, minf=9 00:36:09.357 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:09.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.357 filename1: (groupid=0, jobs=1): err= 0: pid=1759574: Sun Jul 14 02:23:13 2024 00:36:09.357 read: IOPS=70, BW=281KiB/s (287kB/s)(2816KiB/10036msec) 00:36:09.357 slat (nsec): min=5701, max=69994, avg=31063.94, stdev=10775.91 00:36:09.357 clat (msec): min=122, max=341, avg=227.81, stdev=29.36 00:36:09.357 lat (msec): min=122, max=341, avg=227.84, stdev=29.36 00:36:09.357 clat percentiles (msec): 00:36:09.357 | 1.00th=[ 123], 5.00th=[ 174], 10.00th=[ 203], 20.00th=[ 211], 00:36:09.357 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 234], 00:36:09.357 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 284], 00:36:09.357 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 342], 99.95th=[ 342], 00:36:09.357 | 99.99th=[ 342] 00:36:09.357 bw ( KiB/s): min= 256, max= 384, per=3.94%, avg=275.20, stdev=44.84, samples=20 00:36:09.357 iops : min= 64, max= 96, avg=68.80, stdev=11.21, samples=20 00:36:09.357 lat (msec) : 250=90.62%, 500=9.38% 00:36:09.357 cpu : usr=98.03%, sys=1.61%, ctx=15, majf=0, minf=9 00:36:09.357 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:36:09.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.357 filename1: (groupid=0, jobs=1): err= 0: pid=1759575: Sun Jul 14 02:23:13 2024 00:36:09.357 read: IOPS=97, BW=388KiB/s (398kB/s)(3912KiB/10075msec) 00:36:09.357 slat (nsec): min=7744, max=89649, avg=18265.90, stdev=17361.34 00:36:09.357 clat (msec): min=17, max=298, avg=164.27, stdev=43.86 00:36:09.357 lat (msec): min=17, max=298, avg=164.29, stdev=43.86 00:36:09.357 clat percentiles (msec): 00:36:09.357 | 1.00th=[ 19], 5.00th=[ 122], 10.00th=[ 136], 20.00th=[ 142], 00:36:09.357 | 30.00th=[ 146], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 171], 00:36:09.357 | 70.00th=[ 180], 80.00th=[ 203], 90.00th=[ 224], 95.00th=[ 239], 00:36:09.357 | 99.00th=[ 262], 99.50th=[ 268], 99.90th=[ 300], 99.95th=[ 300], 00:36:09.357 | 99.99th=[ 300] 00:36:09.357 bw ( KiB/s): min= 256, max= 640, per=5.50%, avg=384.80, stdev=81.99, samples=20 00:36:09.357 iops : min= 64, max= 160, 
avg=96.20, stdev=20.50, samples=20 00:36:09.357 lat (msec) : 20=1.64%, 50=1.64%, 100=1.64%, 250=94.07%, 500=1.02% 00:36:09.357 cpu : usr=97.82%, sys=1.55%, ctx=22, majf=0, minf=9 00:36:09.357 IO depths : 1=1.2%, 2=3.2%, 4=10.9%, 8=72.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:36:09.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 issued rwts: total=978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.357 filename1: (groupid=0, jobs=1): err= 0: pid=1759576: Sun Jul 14 02:23:13 2024 00:36:09.357 read: IOPS=74, BW=299KiB/s (306kB/s)(3008KiB/10057msec) 00:36:09.357 slat (usec): min=6, max=271, avg=43.81, stdev=23.60 00:36:09.357 clat (msec): min=21, max=341, avg=213.60, stdev=52.10 00:36:09.357 lat (msec): min=21, max=341, avg=213.65, stdev=52.10 00:36:09.357 clat percentiles (msec): 00:36:09.357 | 1.00th=[ 22], 5.00th=[ 91], 10.00th=[ 140], 20.00th=[ 199], 00:36:09.357 | 30.00th=[ 218], 40.00th=[ 222], 50.00th=[ 226], 60.00th=[ 232], 00:36:09.357 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 271], 00:36:09.357 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 342], 99.95th=[ 342], 00:36:09.357 | 99.99th=[ 342] 00:36:09.357 bw ( KiB/s): min= 256, max= 512, per=4.21%, avg=294.40, stdev=70.49, samples=20 00:36:09.357 iops : min= 64, max= 128, avg=73.60, stdev=17.62, samples=20 00:36:09.357 lat (msec) : 50=2.39%, 100=3.99%, 250=85.11%, 500=8.51% 00:36:09.357 cpu : usr=97.25%, sys=1.69%, ctx=8, majf=0, minf=9 00:36:09.357 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:09.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.357 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.357 filename1: (groupid=0, jobs=1): err= 0: pid=1759577: Sun Jul 14 02:23:13 2024 00:36:09.358 read: IOPS=70, BW=281KiB/s (287kB/s)(2816KiB/10037msec) 00:36:09.358 slat (nsec): min=9019, max=88837, avg=43934.32, stdev=22451.45 00:36:09.358 clat (msec): min=118, max=360, avg=227.69, stdev=29.57 00:36:09.358 lat (msec): min=118, max=360, avg=227.74, stdev=29.57 00:36:09.358 clat percentiles (msec): 00:36:09.358 | 1.00th=[ 122], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 220], 00:36:09.358 | 30.00th=[ 222], 40.00th=[ 224], 50.00th=[ 228], 60.00th=[ 234], 00:36:09.358 | 70.00th=[ 236], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 271], 00:36:09.358 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:36:09.358 | 99.99th=[ 363] 00:36:09.358 bw ( KiB/s): min= 256, max= 384, per=3.94%, avg=275.20, stdev=46.89, samples=20 00:36:09.358 iops : min= 64, max= 96, avg=68.80, stdev=11.72, samples=20 00:36:09.358 lat (msec) : 250=90.91%, 500=9.09% 00:36:09.358 cpu : usr=98.06%, sys=1.49%, ctx=38, majf=0, minf=9 00:36:09.358 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:09.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.358 filename1: (groupid=0, jobs=1): err= 0: pid=1759578: Sun Jul 14 02:23:13 2024 
00:36:09.358 read: IOPS=69, BW=280KiB/s (287kB/s)(2808KiB/10034msec) 00:36:09.358 slat (usec): min=16, max=101, avg=66.81, stdev=15.16 00:36:09.358 clat (msec): min=65, max=367, avg=228.10, stdev=36.43 00:36:09.358 lat (msec): min=65, max=367, avg=228.17, stdev=36.43 00:36:09.358 clat percentiles (msec): 00:36:09.358 | 1.00th=[ 67], 5.00th=[ 180], 10.00th=[ 201], 20.00th=[ 215], 00:36:09.358 | 30.00th=[ 224], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 234], 00:36:09.358 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 257], 95.00th=[ 288], 00:36:09.358 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 368], 99.95th=[ 368], 00:36:09.358 | 99.99th=[ 368] 00:36:09.358 bw ( KiB/s): min= 128, max= 384, per=3.93%, avg=274.40, stdev=69.89, samples=20 00:36:09.358 iops : min= 32, max= 96, avg=68.60, stdev=17.47, samples=20 00:36:09.358 lat (msec) : 100=1.99%, 250=85.75%, 500=12.25% 00:36:09.358 cpu : usr=98.17%, sys=1.38%, ctx=12, majf=0, minf=9 00:36:09.358 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:09.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.358 filename1: (groupid=0, jobs=1): err= 0: pid=1759579: Sun Jul 14 02:23:13 2024 00:36:09.358 read: IOPS=71, BW=287KiB/s (294kB/s)(2880KiB/10048msec) 00:36:09.358 slat (usec): min=10, max=103, avg=44.47, stdev=25.93 00:36:09.358 clat (msec): min=117, max=326, avg=222.88, stdev=26.65 00:36:09.358 lat (msec): min=117, max=326, avg=222.93, stdev=26.66 00:36:09.358 clat percentiles (msec): 00:36:09.358 | 1.00th=[ 118], 5.00th=[ 159], 10.00th=[ 190], 20.00th=[ 215], 00:36:09.358 | 30.00th=[ 222], 40.00th=[ 224], 50.00th=[ 226], 60.00th=[ 234], 00:36:09.358 | 70.00th=[ 236], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 249], 00:36:09.358 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 326], 99.95th=[ 326], 00:36:09.358 | 99.99th=[ 326] 00:36:09.358 bw ( KiB/s): min= 256, max= 384, per=4.03%, avg=281.60, stdev=52.53, samples=20 00:36:09.358 iops : min= 64, max= 96, avg=70.40, stdev=13.13, samples=20 00:36:09.358 lat (msec) : 250=95.00%, 500=5.00% 00:36:09.358 cpu : usr=98.19%, sys=1.28%, ctx=36, majf=0, minf=9 00:36:09.358 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:09.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.358 filename2: (groupid=0, jobs=1): err= 0: pid=1759580: Sun Jul 14 02:23:13 2024 00:36:09.358 read: IOPS=71, BW=287KiB/s (294kB/s)(2880KiB/10040msec) 00:36:09.358 slat (nsec): min=8367, max=54978, avg=22097.51, stdev=9343.71 00:36:09.358 clat (msec): min=122, max=302, avg=222.90, stdev=29.38 00:36:09.358 lat (msec): min=122, max=302, avg=222.93, stdev=29.39 00:36:09.358 clat percentiles (msec): 00:36:09.358 | 1.00th=[ 123], 5.00th=[ 148], 10.00th=[ 180], 20.00th=[ 209], 00:36:09.358 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 232], 00:36:09.358 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:36:09.358 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 305], 00:36:09.358 | 99.99th=[ 305] 00:36:09.358 bw ( KiB/s): min= 256, 
max= 384, per=4.03%, avg=281.60, stdev=52.53, samples=20 00:36:09.358 iops : min= 64, max= 96, avg=70.40, stdev=13.13, samples=20 00:36:09.358 lat (msec) : 250=94.17%, 500=5.83% 00:36:09.358 cpu : usr=97.99%, sys=1.59%, ctx=20, majf=0, minf=9 00:36:09.358 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:09.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.358 filename2: (groupid=0, jobs=1): err= 0: pid=1759581: Sun Jul 14 02:23:13 2024 00:36:09.358 read: IOPS=70, BW=281KiB/s (287kB/s)(2816KiB/10039msec) 00:36:09.358 slat (usec): min=17, max=114, avg=63.24, stdev=16.20 00:36:09.358 clat (msec): min=141, max=327, avg=227.62, stdev=20.00 00:36:09.358 lat (msec): min=141, max=327, avg=227.69, stdev=20.00 00:36:09.358 clat percentiles (msec): 00:36:09.358 | 1.00th=[ 184], 5.00th=[ 199], 10.00th=[ 201], 20.00th=[ 222], 00:36:09.358 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 234], 00:36:09.358 | 70.00th=[ 236], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 251], 00:36:09.358 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 330], 00:36:09.358 | 99.99th=[ 330] 00:36:09.358 bw ( KiB/s): min= 256, max= 384, per=3.94%, avg=275.20, stdev=46.89, samples=20 00:36:09.358 iops : min= 64, max= 96, avg=68.80, stdev=11.72, samples=20 00:36:09.358 lat (msec) : 250=93.89%, 500=6.11% 00:36:09.358 cpu : usr=98.33%, sys=1.26%, ctx=22, majf=0, minf=9 00:36:09.358 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:09.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.358 filename2: (groupid=0, jobs=1): err= 0: pid=1759582: Sun Jul 14 02:23:13 2024 00:36:09.358 read: IOPS=83, BW=334KiB/s (343kB/s)(3368KiB/10069msec) 00:36:09.358 slat (usec): min=5, max=252, avg=44.95, stdev=38.04 00:36:09.358 clat (msec): min=19, max=319, avg=190.56, stdev=51.39 00:36:09.358 lat (msec): min=19, max=319, avg=190.60, stdev=51.40 00:36:09.358 clat percentiles (msec): 00:36:09.358 | 1.00th=[ 20], 5.00th=[ 81], 10.00th=[ 120], 20.00th=[ 157], 00:36:09.358 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 203], 60.00th=[ 222], 00:36:09.358 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 245], 00:36:09.358 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 321], 00:36:09.358 | 99.99th=[ 321] 00:36:09.358 bw ( KiB/s): min= 256, max= 640, per=4.73%, avg=330.40, stdev=97.57, samples=20 00:36:09.358 iops : min= 64, max= 160, avg=82.60, stdev=24.39, samples=20 00:36:09.358 lat (msec) : 20=1.54%, 50=2.26%, 100=2.61%, 250=92.16%, 500=1.43% 00:36:09.358 cpu : usr=95.68%, sys=2.49%, ctx=136, majf=0, minf=9 00:36:09.358 IO depths : 1=4.2%, 2=9.1%, 4=20.8%, 8=57.5%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:09.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 complete : 0=0.0%, 4=93.0%, 8=2.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 issued rwts: total=842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.358 filename2: 
(groupid=0, jobs=1): err= 0: pid=1759583: Sun Jul 14 02:23:13 2024 00:36:09.358 read: IOPS=69, BW=280KiB/s (286kB/s)(2808KiB/10037msec) 00:36:09.358 slat (usec): min=18, max=210, avg=73.17, stdev=19.67 00:36:09.358 clat (msec): min=66, max=369, avg=228.15, stdev=36.62 00:36:09.358 lat (msec): min=66, max=369, avg=228.22, stdev=36.62 00:36:09.358 clat percentiles (msec): 00:36:09.358 | 1.00th=[ 67], 5.00th=[ 178], 10.00th=[ 201], 20.00th=[ 215], 00:36:09.358 | 30.00th=[ 224], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 234], 00:36:09.358 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 257], 95.00th=[ 288], 00:36:09.358 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 372], 99.95th=[ 372], 00:36:09.358 | 99.99th=[ 372] 00:36:09.358 bw ( KiB/s): min= 128, max= 384, per=3.93%, avg=274.40, stdev=69.89, samples=20 00:36:09.358 iops : min= 32, max= 96, avg=68.60, stdev=17.47, samples=20 00:36:09.358 lat (msec) : 100=1.99%, 250=85.75%, 500=12.25% 00:36:09.358 cpu : usr=96.98%, sys=1.88%, ctx=67, majf=0, minf=9 00:36:09.358 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:09.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.358 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.358 filename2: (groupid=0, jobs=1): err= 0: pid=1759584: Sun Jul 14 02:23:13 2024 00:36:09.358 read: IOPS=68, BW=274KiB/s (281kB/s)(2752KiB/10027msec) 00:36:09.358 slat (usec): min=16, max=103, avg=63.97, stdev=11.14 00:36:09.358 clat (msec): min=122, max=368, avg=232.61, stdev=28.95 00:36:09.358 lat (msec): min=122, max=369, avg=232.68, stdev=28.95 00:36:09.358 clat percentiles (msec): 00:36:09.358 | 1.00th=[ 176], 5.00th=[ 203], 10.00th=[ 209], 20.00th=[ 220], 00:36:09.358 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 234], 00:36:09.358 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 271], 00:36:09.358 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:36:09.358 | 99.99th=[ 368] 00:36:09.358 bw ( KiB/s): min= 128, max= 384, per=3.84%, avg=268.80, stdev=57.24, samples=20 00:36:09.359 iops : min= 32, max= 96, avg=67.20, stdev=14.31, samples=20 00:36:09.359 lat (msec) : 250=90.99%, 500=9.01% 00:36:09.359 cpu : usr=97.16%, sys=1.81%, ctx=16, majf=0, minf=9 00:36:09.359 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:09.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.359 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.359 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.359 filename2: (groupid=0, jobs=1): err= 0: pid=1759585: Sun Jul 14 02:23:13 2024 00:36:09.359 read: IOPS=71, BW=287KiB/s (294kB/s)(2880KiB/10039msec) 00:36:09.359 slat (usec): min=12, max=158, avg=28.65, stdev=23.42 00:36:09.359 clat (msec): min=112, max=341, avg=222.86, stdev=34.78 00:36:09.359 lat (msec): min=112, max=341, avg=222.89, stdev=34.78 00:36:09.359 clat percentiles (msec): 00:36:09.359 | 1.00th=[ 113], 5.00th=[ 157], 10.00th=[ 174], 20.00th=[ 205], 00:36:09.359 | 30.00th=[ 222], 40.00th=[ 224], 50.00th=[ 228], 60.00th=[ 232], 00:36:09.359 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 275], 00:36:09.359 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 342], 99.95th=[ 342], 
00:36:09.359 | 99.99th=[ 342] 00:36:09.359 bw ( KiB/s): min= 256, max= 384, per=4.03%, avg=281.60, stdev=50.70, samples=20 00:36:09.359 iops : min= 64, max= 96, avg=70.40, stdev=12.68, samples=20 00:36:09.359 lat (msec) : 250=90.83%, 500=9.17% 00:36:09.359 cpu : usr=96.43%, sys=2.24%, ctx=43, majf=0, minf=9 00:36:09.359 IO depths : 1=3.1%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:36:09.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.359 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.359 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.359 filename2: (groupid=0, jobs=1): err= 0: pid=1759586: Sun Jul 14 02:23:13 2024 00:36:09.359 read: IOPS=70, BW=281KiB/s (287kB/s)(2816KiB/10034msec) 00:36:09.359 slat (nsec): min=8362, max=72071, avg=13707.96, stdev=8833.10 00:36:09.359 clat (msec): min=67, max=331, avg=227.91, stdev=36.89 00:36:09.359 lat (msec): min=67, max=331, avg=227.93, stdev=36.89 00:36:09.359 clat percentiles (msec): 00:36:09.359 | 1.00th=[ 68], 5.00th=[ 178], 10.00th=[ 197], 20.00th=[ 215], 00:36:09.359 | 30.00th=[ 224], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 236], 00:36:09.359 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 257], 95.00th=[ 288], 00:36:09.359 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 330], 99.95th=[ 330], 00:36:09.359 | 99.99th=[ 330] 00:36:09.359 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=275.20, stdev=73.89, samples=20 00:36:09.359 iops : min= 32, max= 96, avg=68.80, stdev=18.47, samples=20 00:36:09.359 lat (msec) : 100=2.27%, 250=85.80%, 500=11.93% 00:36:09.359 cpu : usr=98.20%, sys=1.37%, ctx=46, majf=0, minf=9 00:36:09.359 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:09.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.359 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.359 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.359 filename2: (groupid=0, jobs=1): err= 0: pid=1759587: Sun Jul 14 02:23:13 2024 00:36:09.359 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10040msec) 00:36:09.359 slat (usec): min=12, max=120, avg=53.68, stdev=22.72 00:36:09.359 clat (msec): min=160, max=293, avg=227.73, stdev=21.79 00:36:09.359 lat (msec): min=160, max=293, avg=227.79, stdev=21.79 00:36:09.359 clat percentiles (msec): 00:36:09.359 | 1.00th=[ 163], 5.00th=[ 184], 10.00th=[ 201], 20.00th=[ 220], 00:36:09.359 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 232], 00:36:09.359 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:36:09.359 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 292], 99.95th=[ 292], 00:36:09.359 | 99.99th=[ 292] 00:36:09.359 bw ( KiB/s): min= 256, max= 384, per=3.94%, avg=275.20, stdev=42.68, samples=20 00:36:09.359 iops : min= 64, max= 96, avg=68.80, stdev=10.67, samples=20 00:36:09.359 lat (msec) : 250=92.90%, 500=7.10% 00:36:09.359 cpu : usr=97.81%, sys=1.47%, ctx=64, majf=0, minf=9 00:36:09.359 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:09.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.359 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.359 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.359 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:36:09.359 00:36:09.359 Run status group 0 (all jobs): 00:36:09.359 READ: bw=6977KiB/s (7145kB/s), 274KiB/s-388KiB/s (281kB/s-398kB/s), io=68.6MiB (72.0MB), run=10027-10075msec 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 bdev_null0 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 [2024-07-14 02:23:13.432048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:09.359 02:23:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 bdev_null1 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.359 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:09.360 { 00:36:09.360 "params": { 00:36:09.360 "name": "Nvme$subsystem", 00:36:09.360 "trtype": "$TEST_TRANSPORT", 00:36:09.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.360 "adrfam": "ipv4", 00:36:09.360 "trsvcid": "$NVMF_PORT", 00:36:09.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.360 "hdgst": ${hdgst:-false}, 00:36:09.360 "ddgst": ${ddgst:-false} 00:36:09.360 }, 00:36:09.360 "method": "bdev_nvme_attach_controller" 00:36:09.360 } 00:36:09.360 EOF 00:36:09.360 )") 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:09.360 { 00:36:09.360 "params": { 00:36:09.360 "name": "Nvme$subsystem", 00:36:09.360 "trtype": "$TEST_TRANSPORT", 00:36:09.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.360 "adrfam": "ipv4", 00:36:09.360 "trsvcid": "$NVMF_PORT", 00:36:09.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.360 "hdgst": ${hdgst:-false}, 00:36:09.360 "ddgst": ${ddgst:-false} 00:36:09.360 }, 00:36:09.360 "method": "bdev_nvme_attach_controller" 00:36:09.360 } 00:36:09.360 EOF 00:36:09.360 )") 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
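For reference, the xtrace above is the test's create_json_sub_conf / gen_nvmf_target_json step: one bdev_nvme_attach_controller JSON fragment is built per null-bdev subsystem via a heredoc, the fragments are comma-joined and pretty-printed with jq, and the resulting document is handed to the fio spdk_bdev plugin through process substitution as --spdk_json_conf /dev/fd/62 (with build/fio/spdk_bdev in LD_PRELOAD, as the records below show). A minimal sketch of that pattern follows; the variable values are illustrative and the outer "subsystems"/"bdev" wrapper is reconstructed from SPDK's JSON config layout rather than copied from this run, so treat it as an approximation, not the helper's exact implementation.

#!/usr/bin/env bash
# Sketch of the config-generation pattern traced above (illustrative values).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 0 1; do
    # One attach-controller entry per nqn.2016-06.io.spdk:cnode$subsystem target.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments inside an SPDK-style JSON config and pretty-print it;
# the test feeds the equivalent stream to fio as --spdk_json_conf /dev/fd/62.
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=,; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON

The fully resolved form of this document, with Nvme0 and Nvme1 pointed at 10.0.0.2:4420, is what the printf record in the trace below emits before fio starts.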
00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:09.360 "params": { 00:36:09.360 "name": "Nvme0", 00:36:09.360 "trtype": "tcp", 00:36:09.360 "traddr": "10.0.0.2", 00:36:09.360 "adrfam": "ipv4", 00:36:09.360 "trsvcid": "4420", 00:36:09.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:09.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:09.360 "hdgst": false, 00:36:09.360 "ddgst": false 00:36:09.360 }, 00:36:09.360 "method": "bdev_nvme_attach_controller" 00:36:09.360 },{ 00:36:09.360 "params": { 00:36:09.360 "name": "Nvme1", 00:36:09.360 "trtype": "tcp", 00:36:09.360 "traddr": "10.0.0.2", 00:36:09.360 "adrfam": "ipv4", 00:36:09.360 "trsvcid": "4420", 00:36:09.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:09.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:09.360 "hdgst": false, 00:36:09.360 "ddgst": false 00:36:09.360 }, 00:36:09.360 "method": "bdev_nvme_attach_controller" 00:36:09.360 }' 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:09.360 02:23:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.360 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:09.360 ... 00:36:09.360 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:09.360 ... 
00:36:09.360 fio-3.35 00:36:09.360 Starting 4 threads 00:36:09.360 EAL: No free 2048 kB hugepages reported on node 1 00:36:14.632 00:36:14.632 filename0: (groupid=0, jobs=1): err= 0: pid=1760976: Sun Jul 14 02:23:19 2024 00:36:14.632 read: IOPS=1870, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5001msec) 00:36:14.632 slat (nsec): min=7380, max=55884, avg=13507.71, stdev=6657.44 00:36:14.632 clat (usec): min=1423, max=7396, avg=4233.79, stdev=823.85 00:36:14.632 lat (usec): min=1439, max=7405, avg=4247.30, stdev=822.75 00:36:14.632 clat percentiles (usec): 00:36:14.632 | 1.00th=[ 1696], 5.00th=[ 3294], 10.00th=[ 3589], 20.00th=[ 3785], 00:36:14.632 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4178], 00:36:14.632 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5473], 95.00th=[ 6128], 00:36:14.632 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7373], 00:36:14.632 | 99.99th=[ 7373] 00:36:14.632 bw ( KiB/s): min=13952, max=16928, per=25.06%, avg=14954.67, stdev=929.93, samples=9 00:36:14.632 iops : min= 1744, max= 2116, avg=1869.33, stdev=116.24, samples=9 00:36:14.632 lat (msec) : 2=1.83%, 4=37.72%, 10=60.45% 00:36:14.632 cpu : usr=95.22%, sys=4.30%, ctx=11, majf=0, minf=43 00:36:14.632 IO depths : 1=0.1%, 2=2.7%, 4=69.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:14.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.632 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.632 issued rwts: total=9355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.632 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:14.632 filename0: (groupid=0, jobs=1): err= 0: pid=1760977: Sun Jul 14 02:23:19 2024 00:36:14.632 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5004msec) 00:36:14.632 slat (nsec): min=7122, max=51674, avg=11206.83, stdev=4939.62 00:36:14.632 clat (usec): min=2398, max=44928, avg=4339.73, stdev=1422.39 00:36:14.632 lat (usec): min=2406, max=44973, avg=4350.93, stdev=1422.25 00:36:14.632 clat percentiles (usec): 00:36:14.632 | 1.00th=[ 2999], 5.00th=[ 3425], 10.00th=[ 3621], 20.00th=[ 3785], 00:36:14.633 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4228], 00:36:14.633 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5604], 95.00th=[ 6128], 00:36:14.633 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7767], 99.95th=[44827], 00:36:14.633 | 99.99th=[44827] 00:36:14.633 bw ( KiB/s): min=13792, max=15552, per=24.52%, avg=14635.20, stdev=654.29, samples=10 00:36:14.633 iops : min= 1724, max= 1944, avg=1829.40, stdev=81.79, samples=10 00:36:14.633 lat (msec) : 4=35.32%, 10=64.59%, 50=0.09% 00:36:14.633 cpu : usr=94.76%, sys=4.78%, ctx=8, majf=0, minf=25 00:36:14.633 IO depths : 1=0.1%, 2=1.0%, 4=69.9%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:14.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.633 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.633 issued rwts: total=9151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.633 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:14.633 filename1: (groupid=0, jobs=1): err= 0: pid=1760978: Sun Jul 14 02:23:19 2024 00:36:14.633 read: IOPS=1851, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5002msec) 00:36:14.633 slat (nsec): min=7085, max=45393, avg=10530.58, stdev=4751.24 00:36:14.633 clat (usec): min=1601, max=48210, avg=4286.87, stdev=1432.27 00:36:14.633 lat (usec): min=1618, max=48239, avg=4297.40, stdev=1432.34 00:36:14.633 clat percentiles (usec): 00:36:14.633 | 1.00th=[ 3032], 
5.00th=[ 3523], 10.00th=[ 3687], 20.00th=[ 3851], 00:36:14.633 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:36:14.633 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5604], 00:36:14.633 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7570], 99.95th=[47973], 00:36:14.633 | 99.99th=[47973] 00:36:14.633 bw ( KiB/s): min=13867, max=15600, per=24.82%, avg=14812.30, stdev=597.15, samples=10 00:36:14.633 iops : min= 1733, max= 1950, avg=1851.50, stdev=74.71, samples=10 00:36:14.633 lat (msec) : 2=0.08%, 4=33.01%, 10=66.83%, 50=0.09% 00:36:14.633 cpu : usr=94.06%, sys=5.40%, ctx=8, majf=0, minf=60 00:36:14.633 IO depths : 1=0.2%, 2=3.2%, 4=69.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:14.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.633 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.633 issued rwts: total=9261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.633 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:14.633 filename1: (groupid=0, jobs=1): err= 0: pid=1760979: Sun Jul 14 02:23:19 2024 00:36:14.633 read: IOPS=1911, BW=14.9MiB/s (15.7MB/s)(74.7MiB/5003msec) 00:36:14.633 slat (nsec): min=7045, max=45324, avg=10739.94, stdev=4779.01 00:36:14.633 clat (usec): min=1485, max=45770, avg=4150.67, stdev=1339.02 00:36:14.633 lat (usec): min=1494, max=45799, avg=4161.41, stdev=1339.14 00:36:14.633 clat percentiles (usec): 00:36:14.633 | 1.00th=[ 2769], 5.00th=[ 3261], 10.00th=[ 3458], 20.00th=[ 3720], 00:36:14.633 | 30.00th=[ 3851], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:36:14.633 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5276], 00:36:14.633 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7832], 99.95th=[45876], 00:36:14.633 | 99.99th=[45876] 00:36:14.633 bw ( KiB/s): min=14291, max=16240, per=25.62%, avg=15289.90, stdev=676.32, samples=10 00:36:14.633 iops : min= 1786, max= 2030, avg=1911.20, stdev=84.60, samples=10 00:36:14.633 lat (msec) : 2=0.02%, 4=40.02%, 10=59.88%, 50=0.08% 00:36:14.633 cpu : usr=93.84%, sys=5.58%, ctx=9, majf=0, minf=38 00:36:14.633 IO depths : 1=0.2%, 2=3.2%, 4=69.4%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:14.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.633 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.633 issued rwts: total=9563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.633 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:14.633 00:36:14.633 Run status group 0 (all jobs): 00:36:14.633 READ: bw=58.3MiB/s (61.1MB/s), 14.3MiB/s-14.9MiB/s (15.0MB/s-15.7MB/s), io=292MiB (306MB), run=5001-5004msec 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 02:23:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.633 00:36:14.633 real 0m24.019s 00:36:14.633 user 4m31.426s 00:36:14.633 sys 0m7.549s 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 ************************************ 00:36:14.633 END TEST fio_dif_rand_params 00:36:14.633 ************************************ 00:36:14.633 02:23:19 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:14.633 02:23:19 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:14.633 02:23:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:14.633 02:23:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 ************************************ 00:36:14.633 START TEST fio_dif_digest 00:36:14.633 ************************************ 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 bdev_null0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.633 [2024-07-14 02:23:19.892420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:14.633 02:23:19 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:14.633 02:23:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:14.633 { 00:36:14.633 "params": { 00:36:14.633 "name": "Nvme$subsystem", 00:36:14.633 "trtype": "$TEST_TRANSPORT", 00:36:14.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.633 "adrfam": "ipv4", 00:36:14.633 "trsvcid": "$NVMF_PORT", 00:36:14.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.634 "hdgst": ${hdgst:-false}, 00:36:14.634 "ddgst": ${ddgst:-false} 00:36:14.634 }, 00:36:14.634 "method": "bdev_nvme_attach_controller" 00:36:14.634 } 00:36:14.634 EOF 00:36:14.634 )") 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:14.634 "params": { 00:36:14.634 "name": "Nvme0", 00:36:14.634 "trtype": "tcp", 00:36:14.634 "traddr": "10.0.0.2", 00:36:14.634 "adrfam": "ipv4", 00:36:14.634 "trsvcid": "4420", 00:36:14.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:14.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:14.634 "hdgst": true, 00:36:14.634 "ddgst": true 00:36:14.634 }, 00:36:14.634 "method": "bdev_nvme_attach_controller" 00:36:14.634 }' 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:14.634 02:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.634 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:14.634 ... 
00:36:14.634 fio-3.35 00:36:14.634 Starting 3 threads 00:36:14.634 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.908 00:36:26.908 filename0: (groupid=0, jobs=1): err= 0: pid=1761740: Sun Jul 14 02:23:30 2024 00:36:26.908 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(224MiB/10046msec) 00:36:26.908 slat (nsec): min=5636, max=34192, avg=16050.74, stdev=3445.67 00:36:26.908 clat (usec): min=11872, max=64409, avg=16764.33, stdev=5956.96 00:36:26.908 lat (usec): min=11907, max=64422, avg=16780.38, stdev=5956.99 00:36:26.908 clat percentiles (usec): 00:36:26.908 | 1.00th=[13042], 5.00th=[13960], 10.00th=[14353], 20.00th=[14877], 00:36:26.908 | 30.00th=[15270], 40.00th=[15664], 50.00th=[15926], 60.00th=[16319], 00:36:26.908 | 70.00th=[16581], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:36:26.908 | 99.00th=[57934], 99.50th=[59507], 99.90th=[64226], 99.95th=[64226], 00:36:26.908 | 99.99th=[64226] 00:36:26.908 bw ( KiB/s): min=16128, max=25088, per=28.84%, avg=22924.80, stdev=2321.86, samples=20 00:36:26.908 iops : min= 126, max= 196, avg=179.10, stdev=18.14, samples=20 00:36:26.908 lat (msec) : 20=97.94%, 50=0.17%, 100=1.90% 00:36:26.908 cpu : usr=88.44%, sys=9.06%, ctx=586, majf=0, minf=70 00:36:26.908 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:26.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.908 issued rwts: total=1793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.908 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:26.908 filename0: (groupid=0, jobs=1): err= 0: pid=1761741: Sun Jul 14 02:23:30 2024 00:36:26.908 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(281MiB/10046msec) 00:36:26.908 slat (nsec): min=4422, max=85683, avg=16093.92, stdev=4370.06 00:36:26.908 clat (usec): min=8329, max=54304, avg=13361.34, stdev=1708.28 00:36:26.908 lat (usec): min=8344, max=54318, avg=13377.43, stdev=1708.18 00:36:26.908 clat percentiles (usec): 00:36:26.908 | 1.00th=[ 9241], 5.00th=[10683], 10.00th=[11863], 20.00th=[12518], 00:36:26.908 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:36:26.908 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14746], 95.00th=[15139], 00:36:26.908 | 99.00th=[15926], 99.50th=[16450], 99.90th=[18744], 99.95th=[47449], 00:36:26.908 | 99.99th=[54264] 00:36:26.908 bw ( KiB/s): min=27136, max=30464, per=36.18%, avg=28761.60, stdev=1041.62, samples=20 00:36:26.908 iops : min= 212, max= 238, avg=224.70, stdev= 8.14, samples=20 00:36:26.908 lat (msec) : 10=3.16%, 20=96.75%, 50=0.04%, 100=0.04% 00:36:26.908 cpu : usr=87.64%, sys=9.38%, ctx=697, majf=0, minf=150 00:36:26.908 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:26.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.908 issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.908 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:26.908 filename0: (groupid=0, jobs=1): err= 0: pid=1761743: Sun Jul 14 02:23:30 2024 00:36:26.908 read: IOPS=218, BW=27.3MiB/s (28.7MB/s)(275MiB/10046msec) 00:36:26.908 slat (nsec): min=3780, max=30483, avg=14563.51, stdev=1547.63 00:36:26.908 clat (usec): min=8374, max=52521, avg=13679.39, stdev=1677.68 00:36:26.908 lat (usec): min=8388, max=52535, avg=13693.96, stdev=1677.70 00:36:26.908 clat percentiles (usec): 00:36:26.908 
| 1.00th=[ 9765], 5.00th=[11207], 10.00th=[12125], 20.00th=[12911], 00:36:26.908 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:36:26.908 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:36:26.908 | 99.00th=[16188], 99.50th=[16581], 99.90th=[21103], 99.95th=[49021], 00:36:26.908 | 99.99th=[52691] 00:36:26.908 bw ( KiB/s): min=27392, max=29952, per=35.35%, avg=28098.90, stdev=747.93, samples=20 00:36:26.908 iops : min= 214, max= 234, avg=219.50, stdev= 5.80, samples=20 00:36:26.908 lat (msec) : 10=1.23%, 20=98.63%, 50=0.09%, 100=0.05% 00:36:26.908 cpu : usr=91.92%, sys=7.42%, ctx=115, majf=0, minf=100 00:36:26.908 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:26.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.908 issued rwts: total=2197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.908 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:26.908 00:36:26.908 Run status group 0 (all jobs): 00:36:26.908 READ: bw=77.6MiB/s (81.4MB/s), 22.3MiB/s-28.0MiB/s (23.4MB/s-29.3MB/s), io=780MiB (818MB), run=10046-10046msec 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.908 00:36:26.908 real 0m11.086s 00:36:26.908 user 0m27.904s 00:36:26.908 sys 0m2.889s 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:26.908 02:23:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:26.908 ************************************ 00:36:26.908 END TEST fio_dif_digest 00:36:26.908 ************************************ 00:36:26.908 02:23:30 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:26.908 02:23:30 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:26.908 02:23:30 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:26.908 02:23:30 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:26.908 02:23:30 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:26.908 02:23:30 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:26.908 02:23:30 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:26.908 02:23:30 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:26.908 02:23:30 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:36:26.908 rmmod nvme_tcp 00:36:26.908 rmmod nvme_fabrics 00:36:26.908 rmmod nvme_keyring 00:36:26.908 02:23:31 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:26.908 02:23:31 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:26.908 02:23:31 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:26.908 02:23:31 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1755686 ']' 00:36:26.908 02:23:31 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1755686 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1755686 ']' 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1755686 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1755686 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1755686' 00:36:26.908 killing process with pid 1755686 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1755686 00:36:26.908 02:23:31 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1755686 00:36:26.908 02:23:31 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:26.908 02:23:31 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:26.908 Waiting for block devices as requested 00:36:26.908 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:26.908 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:27.169 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:27.169 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:27.169 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:27.169 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:27.429 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:27.429 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:27.429 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:27.429 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:27.697 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:27.697 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:27.697 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:27.697 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:27.959 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:27.959 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:27.959 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:28.217 02:23:33 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:28.217 02:23:33 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:28.217 02:23:33 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:28.217 02:23:33 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:28.217 02:23:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.217 02:23:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:28.217 02:23:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.122 02:23:35 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:30.122 00:36:30.122 real 1m6.243s 00:36:30.122 user 6m25.866s 00:36:30.122 sys 0m19.674s 00:36:30.122 02:23:35 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:36:30.122 02:23:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:30.122 ************************************ 00:36:30.122 END TEST nvmf_dif 00:36:30.122 ************************************ 00:36:30.122 02:23:35 -- common/autotest_common.sh@1142 -- # return 0 00:36:30.122 02:23:35 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:30.122 02:23:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:30.122 02:23:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:30.122 02:23:35 -- common/autotest_common.sh@10 -- # set +x 00:36:30.122 ************************************ 00:36:30.122 START TEST nvmf_abort_qd_sizes 00:36:30.122 ************************************ 00:36:30.122 02:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:30.381 * Looking for test storage... 00:36:30.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.381 02:23:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:30.381 02:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:32.289 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:32.290 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:32.290 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:32.290 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:32.290 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:32.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:32.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:36:32.290 00:36:32.290 --- 10.0.0.2 ping statistics --- 00:36:32.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.290 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:32.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:32.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:36:32.290 00:36:32.290 --- 10.0.0.1 ping statistics --- 00:36:32.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.290 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:32.290 02:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:33.668 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:33.668 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:33.668 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:33.668 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:33.668 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:33.668 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:33.668 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:33.668 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:33.668 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:33.668 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:33.668 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:33.668 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:33.668 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:33.668 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:33.668 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:33.668 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:34.603 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:34.603 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:34.603 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:34.603 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:34.603 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1766803 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1766803 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1766803 ']' 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:34.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:34.604 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.604 [2024-07-14 02:23:40.245113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:34.604 [2024-07-14 02:23:40.245200] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.604 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.862 [2024-07-14 02:23:40.313636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:34.862 [2024-07-14 02:23:40.402709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.862 [2024-07-14 02:23:40.402772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.862 [2024-07-14 02:23:40.402785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.862 [2024-07-14 02:23:40.402796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.862 [2024-07-14 02:23:40.402805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:34.862 [2024-07-14 02:23:40.402901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.862 [2024-07-14 02:23:40.402925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:34.862 [2024-07-14 02:23:40.402982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:34.862 [2024-07-14 02:23:40.402985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.862 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:34.862 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:34.862 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:34.862 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:34.862 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.862 02:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.862 02:23:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:34.863 02:23:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:34.863 02:23:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:34.863 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:34.863 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:34.863 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:34.863 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:34.863 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:34.863 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:35.121 02:23:40 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:35.121 02:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:35.121 ************************************ 00:36:35.121 START TEST spdk_target_abort 00:36:35.121 ************************************ 00:36:35.121 02:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:35.121 02:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:35.121 02:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:35.121 02:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.121 02:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.417 spdk_targetn1 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.417 [2024-07-14 02:23:43.428787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.417 [2024-07-14 02:23:43.461063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:38.417 02:23:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.417 EAL: No free 2048 kB hugepages 
reported on node 1 00:36:40.949 Initializing NVMe Controllers 00:36:40.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.949 Initialization complete. Launching workers. 00:36:40.949 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9774, failed: 0 00:36:40.949 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 8519 00:36:40.949 success 828, unsuccess 427, failed 0 00:36:40.950 02:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.950 02:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.207 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.525 Initializing NVMe Controllers 00:36:44.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:44.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:44.525 Initialization complete. Launching workers. 00:36:44.525 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8682, failed: 0 00:36:44.525 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1253, failed to submit 7429 00:36:44.525 success 342, unsuccess 911, failed 0 00:36:44.525 02:23:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:44.525 02:23:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:44.525 EAL: No free 2048 kB hugepages reported on node 1 00:36:47.824 Initializing NVMe Controllers 00:36:47.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:47.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:47.824 Initialization complete. Launching workers. 
00:36:47.824 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31499, failed: 0 00:36:47.824 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2765, failed to submit 28734 00:36:47.824 success 547, unsuccess 2218, failed 0 00:36:47.824 02:23:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:47.824 02:23:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.824 02:23:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:47.824 02:23:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.824 02:23:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:47.824 02:23:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.824 02:23:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1766803 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1766803 ']' 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1766803 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1766803 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1766803' 00:36:49.198 killing process with pid 1766803 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1766803 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1766803 00:36:49.198 00:36:49.198 real 0m14.290s 00:36:49.198 user 0m54.172s 00:36:49.198 sys 0m2.535s 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:49.198 02:23:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.198 ************************************ 00:36:49.198 END TEST spdk_target_abort 00:36:49.198 ************************************ 00:36:49.455 02:23:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:49.455 02:23:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:49.455 02:23:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:49.455 02:23:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:49.455 02:23:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.455 
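Teardown of the spdk_target_abort half, as traced just above, removes the test subsystem, detaches the bdev_nvme controller named spdk_target, and stops the target process. Written as direct rpc.py calls (rpc_cmd in the trace is assumed to wrap scripts/rpc.py on the default socket; the pid is the one reported in this run):

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn   # drop the test subsystem
  scripts/rpc.py bdev_nvme_detach_controller spdk_target             # detach the controller named spdk_target
  kill 1766803                                                       # target process for this run (pid taken from the trace)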
************************************ 00:36:49.455 START TEST kernel_target_abort 00:36:49.455 ************************************ 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:49.455 02:23:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:50.389 Waiting for block devices as requested 00:36:50.647 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:50.647 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:50.905 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:50.905 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:50.905 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:50.905 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:51.164 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:51.164 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:51.164 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:51.164 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:51.422 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:51.422 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:51.422 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:51.422 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:51.682 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:51.682 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:51.682 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:51.941 No valid GPT data, bailing 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:51.941 02:23:57 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:51.941 00:36:51.941 Discovery Log Number of Records 2, Generation counter 2 00:36:51.941 =====Discovery Log Entry 0====== 00:36:51.941 trtype: tcp 00:36:51.941 adrfam: ipv4 00:36:51.941 subtype: current discovery subsystem 00:36:51.941 treq: not specified, sq flow control disable supported 00:36:51.941 portid: 1 00:36:51.941 trsvcid: 4420 00:36:51.941 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:51.941 traddr: 10.0.0.1 00:36:51.941 eflags: none 00:36:51.941 sectype: none 00:36:51.941 =====Discovery Log Entry 1====== 00:36:51.941 trtype: tcp 00:36:51.941 adrfam: ipv4 00:36:51.941 subtype: nvme subsystem 00:36:51.941 treq: not specified, sq flow control disable supported 00:36:51.941 portid: 1 00:36:51.941 trsvcid: 4420 00:36:51.941 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:51.941 traddr: 10.0.0.1 00:36:51.941 eflags: none 00:36:51.941 sectype: none 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:51.941 02:23:57 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:51.941 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:51.942 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:51.942 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:51.942 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:51.942 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:51.942 02:23:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.199 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.489 Initializing NVMe Controllers 00:36:55.489 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:55.489 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:55.489 Initialization complete. Launching workers. 00:36:55.489 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27942, failed: 0 00:36:55.489 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27942, failed to submit 0 00:36:55.489 success 0, unsuccess 27942, failed 0 00:36:55.489 02:24:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:55.489 02:24:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:55.489 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.776 Initializing NVMe Controllers 00:36:58.776 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:58.776 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:58.776 Initialization complete. Launching workers. 
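The kernel_target_abort runs in this block talk to the Linux nvmet target that was assembled through configfs a few records earlier (the mkdir/echo/ln -s sequence around 02:23:57). The xtrace output does not show where each echo is redirected, so the attribute paths below are inferred from the standard nvmet configfs layout; treat this as an illustrative sketch rather than a verbatim replay:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list the discovery subsystem plus testnqn, as in the log above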
00:36:58.776 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56451, failed: 0 00:36:58.776 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14214, failed to submit 42237 00:36:58.776 success 0, unsuccess 14214, failed 0 00:36:58.776 02:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:58.776 02:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:58.776 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.306 Initializing NVMe Controllers 00:37:01.306 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:01.306 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:01.306 Initialization complete. Launching workers. 00:37:01.306 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54943, failed: 0 00:37:01.306 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13710, failed to submit 41233 00:37:01.306 success 0, unsuccess 13710, failed 0 00:37:01.306 02:24:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:01.306 02:24:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:01.306 02:24:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:01.306 02:24:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:01.306 02:24:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:01.306 02:24:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:01.306 02:24:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:01.306 02:24:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:01.306 02:24:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:01.564 02:24:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:02.501 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:02.501 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:02.501 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:02.501 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:02.501 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:02.501 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:02.502 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:02.502 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:02.502 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:02.502 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:02.502 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:02.502 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:02.502 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:02.502 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:37:02.502 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:02.502 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:03.438 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:37:03.726 00:37:03.726 real 0m14.290s 00:37:03.726 user 0m4.603s 00:37:03.726 sys 0m3.422s 00:37:03.726 02:24:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:03.726 02:24:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.726 ************************************ 00:37:03.726 END TEST kernel_target_abort 00:37:03.726 ************************************ 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:03.726 rmmod nvme_tcp 00:37:03.726 rmmod nvme_fabrics 00:37:03.726 rmmod nvme_keyring 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1766803 ']' 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1766803 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1766803 ']' 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1766803 00:37:03.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1766803) - No such process 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1766803 is not found' 00:37:03.726 Process with pid 1766803 is not found 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:03.726 02:24:09 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:04.663 Waiting for block devices as requested 00:37:04.663 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:04.921 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:04.921 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:05.178 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:05.178 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:05.178 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:05.178 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:05.438 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:05.438 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:05.438 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:05.438 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:05.438 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:05.720 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:05.720 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:37:05.720 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:05.978 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:05.978 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:05.978 02:24:11 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:05.978 02:24:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:05.978 02:24:11 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:05.978 02:24:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:05.978 02:24:11 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.978 02:24:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:05.978 02:24:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.509 02:24:13 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:08.509 00:37:08.509 real 0m37.878s 00:37:08.509 user 1m0.834s 00:37:08.509 sys 0m9.213s 00:37:08.509 02:24:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:08.509 02:24:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:08.509 ************************************ 00:37:08.509 END TEST nvmf_abort_qd_sizes 00:37:08.509 ************************************ 00:37:08.509 02:24:13 -- common/autotest_common.sh@1142 -- # return 0 00:37:08.509 02:24:13 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:08.509 02:24:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:08.509 02:24:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:08.509 02:24:13 -- common/autotest_common.sh@10 -- # set +x 00:37:08.509 ************************************ 00:37:08.509 START TEST keyring_file 00:37:08.509 ************************************ 00:37:08.509 02:24:13 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:08.509 * Looking for test storage... 
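The keyring_file suite that starts here first prepares two PSK key files and registers them with a bdevperf instance before attaching a controller over TLS. In outline (format_interchange_psk is the helper sourced from nvmf/common.sh; its inline Python body is not shown in the trace, so the first lines are a paraphrase of prep_key rather than a literal replay):

  key0path=$(mktemp)                               # /tmp/tmp.CHfE6TRpUi in this run
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"                           # later flipped to 0660 on purpose to exercise the permission check
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0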
00:37:08.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.509 02:24:13 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.509 02:24:13 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.509 02:24:13 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.509 02:24:13 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.509 02:24:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.509 02:24:13 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.509 02:24:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:08.509 02:24:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CHfE6TRpUi 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:08.509 02:24:13 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CHfE6TRpUi 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CHfE6TRpUi 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.CHfE6TRpUi 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gEdkEo966s 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:08.509 02:24:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gEdkEo966s 00:37:08.509 02:24:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gEdkEo966s 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.gEdkEo966s 00:37:08.509 02:24:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=1773179 00:37:08.510 02:24:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:08.510 02:24:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1773179 00:37:08.510 02:24:13 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1773179 ']' 00:37:08.510 02:24:13 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.510 02:24:13 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:08.510 02:24:13 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:08.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.510 02:24:13 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:08.510 02:24:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:08.510 [2024-07-14 02:24:13.929027] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:37:08.510 [2024-07-14 02:24:13.929115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773179 ] 00:37:08.510 EAL: No free 2048 kB hugepages reported on node 1 00:37:08.510 [2024-07-14 02:24:13.988693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.510 [2024-07-14 02:24:14.078951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:08.767 02:24:14 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:08.767 02:24:14 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:08.767 02:24:14 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:08.767 02:24:14 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.767 02:24:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:08.767 [2024-07-14 02:24:14.343952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.767 null0 00:37:08.767 [2024-07-14 02:24:14.376012] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:08.767 [2024-07-14 02:24:14.376491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:08.767 [2024-07-14 02:24:14.384027] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.768 02:24:14 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:08.768 [2024-07-14 02:24:14.396024] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:08.768 request: 00:37:08.768 { 00:37:08.768 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:08.768 "secure_channel": false, 00:37:08.768 "listen_address": { 00:37:08.768 "trtype": "tcp", 00:37:08.768 "traddr": "127.0.0.1", 00:37:08.768 "trsvcid": "4420" 00:37:08.768 }, 00:37:08.768 "method": "nvmf_subsystem_add_listener", 00:37:08.768 "req_id": 1 00:37:08.768 } 00:37:08.768 Got JSON-RPC error response 00:37:08.768 response: 00:37:08.768 { 00:37:08.768 "code": -32602, 00:37:08.768 "message": "Invalid parameters" 00:37:08.768 } 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 
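The request/response pair above is a deliberate negative test: the listener on 127.0.0.1:4420 already exists, so re-adding it has to fail, and the NOT wrapper asserts on the non-zero exit status. The same check written out directly, with the socket defaults and NQN exactly as traced:

  # expected to be rejected with "Listener already exists" / Invalid parameters
  if scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo "duplicate listener was unexpectedly accepted" >&2
    exit 1
  fi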
00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:08.768 02:24:14 keyring_file -- keyring/file.sh@46 -- # bperfpid=1773192 00:37:08.768 02:24:14 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1773192 /var/tmp/bperf.sock 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1773192 ']' 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:08.768 02:24:14 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:08.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:08.768 02:24:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:08.768 [2024-07-14 02:24:14.446819] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:37:08.768 [2024-07-14 02:24:14.446911] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773192 ] 00:37:09.026 EAL: No free 2048 kB hugepages reported on node 1 00:37:09.026 [2024-07-14 02:24:14.507030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.026 [2024-07-14 02:24:14.592284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.026 02:24:14 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:09.026 02:24:14 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:09.026 02:24:14 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CHfE6TRpUi 00:37:09.026 02:24:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CHfE6TRpUi 00:37:09.284 02:24:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gEdkEo966s 00:37:09.285 02:24:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gEdkEo966s 00:37:09.543 02:24:15 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:09.543 02:24:15 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:09.543 02:24:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.543 02:24:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.543 02:24:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:09.801 02:24:15 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.CHfE6TRpUi == \/\t\m\p\/\t\m\p\.\C\H\f\E\6\T\R\p\U\i ]] 00:37:09.801 02:24:15 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:37:09.801 02:24:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:09.801 02:24:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.801 02:24:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.801 02:24:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:10.059 02:24:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.gEdkEo966s == \/\t\m\p\/\t\m\p\.\g\E\d\k\E\o\9\6\6\s ]] 00:37:10.059 02:24:15 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:10.059 02:24:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:10.059 02:24:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.059 02:24:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.059 02:24:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.059 02:24:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.318 02:24:15 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:10.318 02:24:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:10.318 02:24:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:10.318 02:24:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.318 02:24:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.318 02:24:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.318 02:24:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:10.576 02:24:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:10.576 02:24:16 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:10.576 02:24:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:10.835 [2024-07-14 02:24:16.414861] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:10.835 nvme0n1 00:37:10.835 02:24:16 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:10.835 02:24:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:10.835 02:24:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.835 02:24:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.835 02:24:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.835 02:24:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:11.093 02:24:16 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:11.093 02:24:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:11.093 02:24:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:11.093 02:24:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:11.094 02:24:16 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.094 02:24:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.094 02:24:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:11.352 02:24:16 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:11.352 02:24:16 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:11.610 Running I/O for 1 seconds... 00:37:12.547 00:37:12.547 Latency(us) 00:37:12.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:12.547 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:12.547 nvme0n1 : 1.03 4107.49 16.04 0.00 0.00 30674.45 5534.15 40001.23 00:37:12.547 =================================================================================================================== 00:37:12.547 Total : 4107.49 16.04 0.00 0.00 30674.45 5534.15 40001.23 00:37:12.547 0 00:37:12.547 02:24:18 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:12.547 02:24:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:12.806 02:24:18 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:12.806 02:24:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:12.806 02:24:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.806 02:24:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.806 02:24:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.806 02:24:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.064 02:24:18 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:13.064 02:24:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:13.064 02:24:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:13.064 02:24:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.064 02:24:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.064 02:24:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.064 02:24:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:13.322 02:24:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:13.322 02:24:18 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:13.322 02:24:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:13.322 02:24:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:13.322 02:24:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:13.322 02:24:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:13.322 02:24:18 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:13.322 02:24:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:13.322 02:24:18 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:13.322 02:24:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:13.581 [2024-07-14 02:24:19.151694] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:13.581 [2024-07-14 02:24:19.152166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b188f0 (107): Transport endpoint is not connected 00:37:13.581 [2024-07-14 02:24:19.153164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b188f0 (9): Bad file descriptor 00:37:13.581 [2024-07-14 02:24:19.154179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:13.581 [2024-07-14 02:24:19.154208] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:13.581 [2024-07-14 02:24:19.154233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:13.581 request: 00:37:13.581 { 00:37:13.581 "name": "nvme0", 00:37:13.581 "trtype": "tcp", 00:37:13.581 "traddr": "127.0.0.1", 00:37:13.581 "adrfam": "ipv4", 00:37:13.581 "trsvcid": "4420", 00:37:13.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:13.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:13.581 "prchk_reftag": false, 00:37:13.581 "prchk_guard": false, 00:37:13.581 "hdgst": false, 00:37:13.581 "ddgst": false, 00:37:13.581 "psk": "key1", 00:37:13.581 "method": "bdev_nvme_attach_controller", 00:37:13.581 "req_id": 1 00:37:13.581 } 00:37:13.581 Got JSON-RPC error response 00:37:13.581 response: 00:37:13.581 { 00:37:13.581 "code": -5, 00:37:13.581 "message": "Input/output error" 00:37:13.581 } 00:37:13.581 02:24:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:13.581 02:24:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:13.581 02:24:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:13.581 02:24:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:13.581 02:24:19 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:13.581 02:24:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.581 02:24:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.581 02:24:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.581 02:24:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.581 02:24:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.839 02:24:19 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:13.839 02:24:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:13.839 02:24:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:13.839 02:24:19 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.839 02:24:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.839 02:24:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.839 02:24:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:14.097 02:24:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:14.097 02:24:19 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:14.097 02:24:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:14.355 02:24:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:14.355 02:24:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:14.614 02:24:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:14.614 02:24:20 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:14.614 02:24:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.873 02:24:20 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:14.873 02:24:20 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.CHfE6TRpUi 00:37:14.873 02:24:20 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.CHfE6TRpUi 00:37:14.873 02:24:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:14.873 02:24:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.CHfE6TRpUi 00:37:14.873 02:24:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:14.873 02:24:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.873 02:24:20 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:14.873 02:24:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.873 02:24:20 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CHfE6TRpUi 00:37:14.873 02:24:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CHfE6TRpUi 00:37:15.132 [2024-07-14 02:24:20.648028] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CHfE6TRpUi': 0100660 00:37:15.132 [2024-07-14 02:24:20.648065] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:15.132 request: 00:37:15.132 { 00:37:15.132 "name": "key0", 00:37:15.132 "path": "/tmp/tmp.CHfE6TRpUi", 00:37:15.132 "method": "keyring_file_add_key", 00:37:15.132 "req_id": 1 00:37:15.132 } 00:37:15.132 Got JSON-RPC error response 00:37:15.132 response: 00:37:15.132 { 00:37:15.132 "code": -1, 00:37:15.132 "message": "Operation not permitted" 00:37:15.132 } 00:37:15.132 02:24:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:15.132 02:24:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:15.132 02:24:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:15.132 02:24:20 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:15.132 02:24:20 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.CHfE6TRpUi 00:37:15.132 02:24:20 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CHfE6TRpUi 00:37:15.132 02:24:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CHfE6TRpUi 00:37:15.390 02:24:20 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.CHfE6TRpUi 00:37:15.390 02:24:20 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:15.390 02:24:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:15.390 02:24:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.390 02:24:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:15.390 02:24:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.390 02:24:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.654 02:24:21 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:15.654 02:24:21 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.654 02:24:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:15.654 02:24:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.654 02:24:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:15.654 02:24:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.655 02:24:21 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:15.655 02:24:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.655 02:24:21 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.655 02:24:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.914 [2024-07-14 02:24:21.390054] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.CHfE6TRpUi': No such file or directory 00:37:15.914 [2024-07-14 02:24:21.390088] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:15.914 [2024-07-14 02:24:21.390124] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:15.914 [2024-07-14 02:24:21.390145] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:15.914 [2024-07-14 02:24:21.390163] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:15.914 request: 00:37:15.914 { 00:37:15.914 "name": "nvme0", 00:37:15.914 "trtype": "tcp", 00:37:15.914 "traddr": "127.0.0.1", 00:37:15.914 "adrfam": "ipv4", 00:37:15.914 
"trsvcid": "4420", 00:37:15.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.914 "prchk_reftag": false, 00:37:15.914 "prchk_guard": false, 00:37:15.914 "hdgst": false, 00:37:15.914 "ddgst": false, 00:37:15.914 "psk": "key0", 00:37:15.914 "method": "bdev_nvme_attach_controller", 00:37:15.914 "req_id": 1 00:37:15.914 } 00:37:15.914 Got JSON-RPC error response 00:37:15.914 response: 00:37:15.914 { 00:37:15.914 "code": -19, 00:37:15.914 "message": "No such device" 00:37:15.914 } 00:37:15.914 02:24:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:15.914 02:24:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:15.914 02:24:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:15.914 02:24:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:15.914 02:24:21 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:15.914 02:24:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:16.171 02:24:21 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AKebcJcraG 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:16.171 02:24:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:16.171 02:24:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:16.171 02:24:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:16.171 02:24:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:16.171 02:24:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:16.171 02:24:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AKebcJcraG 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AKebcJcraG 00:37:16.171 02:24:21 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.AKebcJcraG 00:37:16.171 02:24:21 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AKebcJcraG 00:37:16.171 02:24:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AKebcJcraG 00:37:16.429 02:24:21 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:16.429 02:24:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:16.686 nvme0n1 00:37:16.686 
02:24:22 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:16.686 02:24:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:16.686 02:24:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.686 02:24:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.686 02:24:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.686 02:24:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:16.944 02:24:22 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:16.944 02:24:22 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:16.944 02:24:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:17.201 02:24:22 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:17.201 02:24:22 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:17.201 02:24:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.201 02:24:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.201 02:24:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.492 02:24:23 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:17.492 02:24:23 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:17.492 02:24:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:17.492 02:24:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:17.492 02:24:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.492 02:24:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.492 02:24:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.750 02:24:23 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:17.750 02:24:23 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:17.750 02:24:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:18.008 02:24:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:18.008 02:24:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.008 02:24:23 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:18.267 02:24:23 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:18.267 02:24:23 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AKebcJcraG 00:37:18.267 02:24:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AKebcJcraG 00:37:18.525 02:24:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gEdkEo966s 00:37:18.525 02:24:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gEdkEo966s 00:37:18.783 02:24:24 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.783 02:24:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.041 nvme0n1 00:37:19.041 02:24:24 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:19.041 02:24:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:19.300 02:24:24 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:19.300 "subsystems": [ 00:37:19.300 { 00:37:19.300 "subsystem": "keyring", 00:37:19.300 "config": [ 00:37:19.300 { 00:37:19.300 "method": "keyring_file_add_key", 00:37:19.300 "params": { 00:37:19.300 "name": "key0", 00:37:19.300 "path": "/tmp/tmp.AKebcJcraG" 00:37:19.300 } 00:37:19.300 }, 00:37:19.300 { 00:37:19.300 "method": "keyring_file_add_key", 00:37:19.300 "params": { 00:37:19.300 "name": "key1", 00:37:19.301 "path": "/tmp/tmp.gEdkEo966s" 00:37:19.301 } 00:37:19.301 } 00:37:19.301 ] 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "subsystem": "iobuf", 00:37:19.301 "config": [ 00:37:19.301 { 00:37:19.301 "method": "iobuf_set_options", 00:37:19.301 "params": { 00:37:19.301 "small_pool_count": 8192, 00:37:19.301 "large_pool_count": 1024, 00:37:19.301 "small_bufsize": 8192, 00:37:19.301 "large_bufsize": 135168 00:37:19.301 } 00:37:19.301 } 00:37:19.301 ] 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "subsystem": "sock", 00:37:19.301 "config": [ 00:37:19.301 { 00:37:19.301 "method": "sock_set_default_impl", 00:37:19.301 "params": { 00:37:19.301 "impl_name": "posix" 00:37:19.301 } 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "method": "sock_impl_set_options", 00:37:19.301 "params": { 00:37:19.301 "impl_name": "ssl", 00:37:19.301 "recv_buf_size": 4096, 00:37:19.301 "send_buf_size": 4096, 00:37:19.301 "enable_recv_pipe": true, 00:37:19.301 "enable_quickack": false, 00:37:19.301 "enable_placement_id": 0, 00:37:19.301 "enable_zerocopy_send_server": true, 00:37:19.301 "enable_zerocopy_send_client": false, 00:37:19.301 "zerocopy_threshold": 0, 00:37:19.301 "tls_version": 0, 00:37:19.301 "enable_ktls": false 00:37:19.301 } 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "method": "sock_impl_set_options", 00:37:19.301 "params": { 00:37:19.301 "impl_name": "posix", 00:37:19.301 "recv_buf_size": 2097152, 00:37:19.301 "send_buf_size": 2097152, 00:37:19.301 "enable_recv_pipe": true, 00:37:19.301 "enable_quickack": false, 00:37:19.301 "enable_placement_id": 0, 00:37:19.301 "enable_zerocopy_send_server": true, 00:37:19.301 "enable_zerocopy_send_client": false, 00:37:19.301 "zerocopy_threshold": 0, 00:37:19.301 "tls_version": 0, 00:37:19.301 "enable_ktls": false 00:37:19.301 } 00:37:19.301 } 00:37:19.301 ] 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "subsystem": "vmd", 00:37:19.301 "config": [] 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "subsystem": "accel", 00:37:19.301 "config": [ 00:37:19.301 { 00:37:19.301 "method": "accel_set_options", 00:37:19.301 "params": { 00:37:19.301 "small_cache_size": 128, 00:37:19.301 "large_cache_size": 16, 00:37:19.301 "task_count": 2048, 00:37:19.301 "sequence_count": 2048, 00:37:19.301 "buf_count": 2048 00:37:19.301 } 00:37:19.301 } 00:37:19.301 ] 00:37:19.301 
}, 00:37:19.301 { 00:37:19.301 "subsystem": "bdev", 00:37:19.301 "config": [ 00:37:19.301 { 00:37:19.301 "method": "bdev_set_options", 00:37:19.301 "params": { 00:37:19.301 "bdev_io_pool_size": 65535, 00:37:19.301 "bdev_io_cache_size": 256, 00:37:19.301 "bdev_auto_examine": true, 00:37:19.301 "iobuf_small_cache_size": 128, 00:37:19.301 "iobuf_large_cache_size": 16 00:37:19.301 } 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "method": "bdev_raid_set_options", 00:37:19.301 "params": { 00:37:19.301 "process_window_size_kb": 1024 00:37:19.301 } 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "method": "bdev_iscsi_set_options", 00:37:19.301 "params": { 00:37:19.301 "timeout_sec": 30 00:37:19.301 } 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "method": "bdev_nvme_set_options", 00:37:19.301 "params": { 00:37:19.301 "action_on_timeout": "none", 00:37:19.301 "timeout_us": 0, 00:37:19.301 "timeout_admin_us": 0, 00:37:19.301 "keep_alive_timeout_ms": 10000, 00:37:19.301 "arbitration_burst": 0, 00:37:19.301 "low_priority_weight": 0, 00:37:19.301 "medium_priority_weight": 0, 00:37:19.301 "high_priority_weight": 0, 00:37:19.301 "nvme_adminq_poll_period_us": 10000, 00:37:19.301 "nvme_ioq_poll_period_us": 0, 00:37:19.301 "io_queue_requests": 512, 00:37:19.301 "delay_cmd_submit": true, 00:37:19.301 "transport_retry_count": 4, 00:37:19.301 "bdev_retry_count": 3, 00:37:19.301 "transport_ack_timeout": 0, 00:37:19.301 "ctrlr_loss_timeout_sec": 0, 00:37:19.301 "reconnect_delay_sec": 0, 00:37:19.301 "fast_io_fail_timeout_sec": 0, 00:37:19.301 "disable_auto_failback": false, 00:37:19.301 "generate_uuids": false, 00:37:19.301 "transport_tos": 0, 00:37:19.301 "nvme_error_stat": false, 00:37:19.301 "rdma_srq_size": 0, 00:37:19.301 "io_path_stat": false, 00:37:19.301 "allow_accel_sequence": false, 00:37:19.301 "rdma_max_cq_size": 0, 00:37:19.301 "rdma_cm_event_timeout_ms": 0, 00:37:19.301 "dhchap_digests": [ 00:37:19.301 "sha256", 00:37:19.301 "sha384", 00:37:19.301 "sha512" 00:37:19.301 ], 00:37:19.301 "dhchap_dhgroups": [ 00:37:19.301 "null", 00:37:19.301 "ffdhe2048", 00:37:19.301 "ffdhe3072", 00:37:19.301 "ffdhe4096", 00:37:19.301 "ffdhe6144", 00:37:19.301 "ffdhe8192" 00:37:19.301 ] 00:37:19.301 } 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "method": "bdev_nvme_attach_controller", 00:37:19.301 "params": { 00:37:19.301 "name": "nvme0", 00:37:19.301 "trtype": "TCP", 00:37:19.301 "adrfam": "IPv4", 00:37:19.301 "traddr": "127.0.0.1", 00:37:19.301 "trsvcid": "4420", 00:37:19.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.301 "prchk_reftag": false, 00:37:19.301 "prchk_guard": false, 00:37:19.301 "ctrlr_loss_timeout_sec": 0, 00:37:19.301 "reconnect_delay_sec": 0, 00:37:19.301 "fast_io_fail_timeout_sec": 0, 00:37:19.301 "psk": "key0", 00:37:19.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.301 "hdgst": false, 00:37:19.301 "ddgst": false 00:37:19.301 } 00:37:19.301 }, 00:37:19.301 { 00:37:19.301 "method": "bdev_nvme_set_hotplug", 00:37:19.301 "params": { 00:37:19.301 "period_us": 100000, 00:37:19.301 "enable": false 00:37:19.301 } 00:37:19.301 }, 00:37:19.301 { 00:37:19.302 "method": "bdev_wait_for_examine" 00:37:19.302 } 00:37:19.302 ] 00:37:19.302 }, 00:37:19.302 { 00:37:19.302 "subsystem": "nbd", 00:37:19.302 "config": [] 00:37:19.302 } 00:37:19.302 ] 00:37:19.302 }' 00:37:19.302 02:24:24 keyring_file -- keyring/file.sh@114 -- # killprocess 1773192 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1773192 ']' 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1773192 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1773192 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1773192' 00:37:19.302 killing process with pid 1773192 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@967 -- # kill 1773192 00:37:19.302 Received shutdown signal, test time was about 1.000000 seconds 00:37:19.302 00:37:19.302 Latency(us) 00:37:19.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.302 =================================================================================================================== 00:37:19.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:19.302 02:24:24 keyring_file -- common/autotest_common.sh@972 -- # wait 1773192 00:37:19.561 02:24:25 keyring_file -- keyring/file.sh@117 -- # bperfpid=1774641 00:37:19.561 02:24:25 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1774641 /var/tmp/bperf.sock 00:37:19.561 02:24:25 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1774641 ']' 00:37:19.561 02:24:25 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:19.561 02:24:25 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:19.561 02:24:25 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:19.561 02:24:25 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:19.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
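The file-based keyring flow exercised above reduces to a handful of RPCs against the bperf socket. A minimal hand-run sketch, assuming the repo and socket paths used in this job (the interchange key shown is the test's throwaway example value, not a real secret); note that keyring_file_add_key insists on owner-only permissions, which is why the 0660 attempt earlier was rejected and 0600 succeeded:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  keyfile=$(mktemp)
  echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$keyfile"
  chmod 0600 "$keyfile"    # group/other access makes keyring_file_check_path reject the add
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 "$keyfile"
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'

While the controller holds the key, the refcount reads 2; removing the key at that point only marks it removed, and detaching the controller drops the last reference, as the checks above showed.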
00:37:19.561 02:24:25 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:19.561 "subsystems": [ 00:37:19.561 { 00:37:19.561 "subsystem": "keyring", 00:37:19.561 "config": [ 00:37:19.561 { 00:37:19.561 "method": "keyring_file_add_key", 00:37:19.561 "params": { 00:37:19.561 "name": "key0", 00:37:19.561 "path": "/tmp/tmp.AKebcJcraG" 00:37:19.561 } 00:37:19.561 }, 00:37:19.561 { 00:37:19.561 "method": "keyring_file_add_key", 00:37:19.561 "params": { 00:37:19.561 "name": "key1", 00:37:19.561 "path": "/tmp/tmp.gEdkEo966s" 00:37:19.561 } 00:37:19.561 } 00:37:19.561 ] 00:37:19.561 }, 00:37:19.561 { 00:37:19.561 "subsystem": "iobuf", 00:37:19.561 "config": [ 00:37:19.561 { 00:37:19.561 "method": "iobuf_set_options", 00:37:19.561 "params": { 00:37:19.561 "small_pool_count": 8192, 00:37:19.561 "large_pool_count": 1024, 00:37:19.561 "small_bufsize": 8192, 00:37:19.561 "large_bufsize": 135168 00:37:19.561 } 00:37:19.561 } 00:37:19.561 ] 00:37:19.561 }, 00:37:19.561 { 00:37:19.561 "subsystem": "sock", 00:37:19.561 "config": [ 00:37:19.561 { 00:37:19.561 "method": "sock_set_default_impl", 00:37:19.561 "params": { 00:37:19.561 "impl_name": "posix" 00:37:19.561 } 00:37:19.561 }, 00:37:19.561 { 00:37:19.561 "method": "sock_impl_set_options", 00:37:19.561 "params": { 00:37:19.561 "impl_name": "ssl", 00:37:19.561 "recv_buf_size": 4096, 00:37:19.561 "send_buf_size": 4096, 00:37:19.561 "enable_recv_pipe": true, 00:37:19.561 "enable_quickack": false, 00:37:19.561 "enable_placement_id": 0, 00:37:19.561 "enable_zerocopy_send_server": true, 00:37:19.561 "enable_zerocopy_send_client": false, 00:37:19.561 "zerocopy_threshold": 0, 00:37:19.561 "tls_version": 0, 00:37:19.561 "enable_ktls": false 00:37:19.561 } 00:37:19.561 }, 00:37:19.561 { 00:37:19.561 "method": "sock_impl_set_options", 00:37:19.561 "params": { 00:37:19.561 "impl_name": "posix", 00:37:19.561 "recv_buf_size": 2097152, 00:37:19.562 "send_buf_size": 2097152, 00:37:19.562 "enable_recv_pipe": true, 00:37:19.562 "enable_quickack": false, 00:37:19.562 "enable_placement_id": 0, 00:37:19.562 "enable_zerocopy_send_server": true, 00:37:19.562 "enable_zerocopy_send_client": false, 00:37:19.562 "zerocopy_threshold": 0, 00:37:19.562 "tls_version": 0, 00:37:19.562 "enable_ktls": false 00:37:19.562 } 00:37:19.562 } 00:37:19.562 ] 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "subsystem": "vmd", 00:37:19.562 "config": [] 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "subsystem": "accel", 00:37:19.562 "config": [ 00:37:19.562 { 00:37:19.562 "method": "accel_set_options", 00:37:19.562 "params": { 00:37:19.562 "small_cache_size": 128, 00:37:19.562 "large_cache_size": 16, 00:37:19.562 "task_count": 2048, 00:37:19.562 "sequence_count": 2048, 00:37:19.562 "buf_count": 2048 00:37:19.562 } 00:37:19.562 } 00:37:19.562 ] 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "subsystem": "bdev", 00:37:19.562 "config": [ 00:37:19.562 { 00:37:19.562 "method": "bdev_set_options", 00:37:19.562 "params": { 00:37:19.562 "bdev_io_pool_size": 65535, 00:37:19.562 "bdev_io_cache_size": 256, 00:37:19.562 "bdev_auto_examine": true, 00:37:19.562 "iobuf_small_cache_size": 128, 00:37:19.562 "iobuf_large_cache_size": 16 00:37:19.562 } 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "method": "bdev_raid_set_options", 00:37:19.562 "params": { 00:37:19.562 "process_window_size_kb": 1024 00:37:19.562 } 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "method": "bdev_iscsi_set_options", 00:37:19.562 "params": { 00:37:19.562 "timeout_sec": 30 00:37:19.562 } 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "method": 
"bdev_nvme_set_options", 00:37:19.562 "params": { 00:37:19.562 "action_on_timeout": "none", 00:37:19.562 "timeout_us": 0, 00:37:19.562 "timeout_admin_us": 0, 00:37:19.562 "keep_alive_timeout_ms": 10000, 00:37:19.562 "arbitration_burst": 0, 00:37:19.562 "low_priority_weight": 0, 00:37:19.562 "medium_priority_weight": 0, 00:37:19.562 "high_priority_weight": 0, 00:37:19.562 "nvme_adminq_poll_period_us": 10000, 00:37:19.562 "nvme_ioq_poll_period_us": 0, 00:37:19.562 "io_queue_requests": 512, 00:37:19.562 "delay_cmd_submit": true, 00:37:19.562 "transport_retry_count": 4, 00:37:19.562 "bdev_retry_count": 3, 00:37:19.562 "transport_ack_timeout": 0, 00:37:19.562 "ctrlr_loss_timeout_sec": 0, 00:37:19.562 "reconnect_delay_sec": 0, 00:37:19.562 "fast_io_fail_timeout_sec": 0, 00:37:19.562 "disable_auto_failback": false, 00:37:19.562 "generate_uuids": false, 00:37:19.562 "transport_tos": 0, 00:37:19.562 "nvme_error_stat": false, 00:37:19.562 "rdma_srq_size": 0, 00:37:19.562 "io_path_stat": false, 00:37:19.562 "allow_accel_sequence": false, 00:37:19.562 "rdma_max_cq_size": 0, 00:37:19.562 "rdma_cm_event_timeout_ms": 0, 00:37:19.562 "dhchap_digests": [ 00:37:19.562 "sha256", 00:37:19.562 "sha384", 00:37:19.562 "sha512" 00:37:19.562 ], 00:37:19.562 "dhchap_dhgroups": [ 00:37:19.562 "null", 00:37:19.562 "ffdhe2048", 00:37:19.562 "ffdhe3072", 00:37:19.562 "ffdhe4096", 00:37:19.562 "ffdhe6144", 00:37:19.562 "ffdhe8192" 00:37:19.562 ] 00:37:19.562 } 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "method": "bdev_nvme_attach_controller", 00:37:19.562 "params": { 00:37:19.562 "name": "nvme0", 00:37:19.562 "trtype": "TCP", 00:37:19.562 "adrfam": "IPv4", 00:37:19.562 "traddr": "127.0.0.1", 00:37:19.562 "trsvcid": "4420", 00:37:19.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.562 "prchk_reftag": false, 00:37:19.562 "prchk_guard": false, 00:37:19.562 "ctrlr_loss_timeout_sec": 0, 00:37:19.562 "reconnect_delay_sec": 0, 00:37:19.562 "fast_io_fail_timeout_sec": 0, 00:37:19.562 "psk": "key0", 00:37:19.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.562 "hdgst": false, 00:37:19.562 "ddgst": false 00:37:19.562 } 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "method": "bdev_nvme_set_hotplug", 00:37:19.562 "params": { 00:37:19.562 "period_us": 100000, 00:37:19.562 "enable": false 00:37:19.562 } 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "method": "bdev_wait_for_examine" 00:37:19.562 } 00:37:19.562 ] 00:37:19.562 }, 00:37:19.562 { 00:37:19.562 "subsystem": "nbd", 00:37:19.562 "config": [] 00:37:19.562 } 00:37:19.562 ] 00:37:19.562 }' 00:37:19.562 02:24:25 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:19.562 02:24:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:19.562 [2024-07-14 02:24:25.168611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:37:19.562 [2024-07-14 02:24:25.168694] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774641 ] 00:37:19.562 EAL: No free 2048 kB hugepages reported on node 1 00:37:19.562 [2024-07-14 02:24:25.229791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.822 [2024-07-14 02:24:25.320901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.822 [2024-07-14 02:24:25.506766] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:20.756 02:24:26 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:20.756 02:24:26 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:20.756 02:24:26 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:20.756 02:24:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.756 02:24:26 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:20.756 02:24:26 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:20.756 02:24:26 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:20.756 02:24:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:20.756 02:24:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.756 02:24:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.756 02:24:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.756 02:24:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:21.015 02:24:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:21.015 02:24:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:21.015 02:24:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:21.015 02:24:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.015 02:24:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.015 02:24:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.015 02:24:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:21.272 02:24:26 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:21.272 02:24:26 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:21.272 02:24:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:21.272 02:24:26 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:21.531 02:24:27 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:21.531 02:24:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:21.531 02:24:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.AKebcJcraG /tmp/tmp.gEdkEo966s 00:37:21.531 02:24:27 keyring_file -- keyring/file.sh@20 -- # killprocess 1774641 00:37:21.531 02:24:27 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1774641 ']' 00:37:21.531 02:24:27 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1774641 00:37:21.531 02:24:27 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:37:21.531 02:24:27 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:21.531 02:24:27 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774641 00:37:21.531 02:24:27 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:21.531 02:24:27 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:21.531 02:24:27 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774641' 00:37:21.531 killing process with pid 1774641 00:37:21.531 02:24:27 keyring_file -- common/autotest_common.sh@967 -- # kill 1774641 00:37:21.531 Received shutdown signal, test time was about 1.000000 seconds 00:37:21.531 00:37:21.531 Latency(us) 00:37:21.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.531 =================================================================================================================== 00:37:21.531 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:21.531 02:24:27 keyring_file -- common/autotest_common.sh@972 -- # wait 1774641 00:37:21.792 02:24:27 keyring_file -- keyring/file.sh@21 -- # killprocess 1773179 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1773179 ']' 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1773179 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1773179 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1773179' 00:37:21.792 killing process with pid 1773179 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@967 -- # kill 1773179 00:37:21.792 [2024-07-14 02:24:27.381640] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:21.792 02:24:27 keyring_file -- common/autotest_common.sh@972 -- # wait 1773179 00:37:22.359 00:37:22.359 real 0m14.069s 00:37:22.359 user 0m34.692s 00:37:22.359 sys 0m3.165s 00:37:22.359 02:24:27 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:22.359 02:24:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:22.359 ************************************ 00:37:22.359 END TEST keyring_file 00:37:22.359 ************************************ 00:37:22.359 02:24:27 -- common/autotest_common.sh@1142 -- # return 0 00:37:22.359 02:24:27 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:22.359 02:24:27 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:22.359 02:24:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:22.359 02:24:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:22.359 02:24:27 -- common/autotest_common.sh@10 -- # set +x 00:37:22.359 ************************************ 00:37:22.359 START TEST keyring_linux 00:37:22.359 ************************************ 00:37:22.359 02:24:27 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:22.359 * Looking for test storage... 00:37:22.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:22.359 02:24:27 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.359 02:24:27 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.359 02:24:27 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.359 02:24:27 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.359 02:24:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.359 02:24:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.359 02:24:27 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.359 02:24:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:22.359 02:24:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:22.359 02:24:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:22.359 02:24:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:22.359 02:24:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:22.359 02:24:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:22.359 02:24:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:22.359 02:24:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:22.359 02:24:27 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:22.359 02:24:27 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:22.359 /tmp/:spdk-test:key0 00:37:22.359 02:24:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:22.359 02:24:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:22.360 02:24:27 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:22.360 02:24:27 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:22.360 02:24:27 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:22.360 02:24:27 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:22.360 02:24:27 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:22.360 02:24:27 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:22.360 02:24:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:22.360 02:24:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:22.360 /tmp/:spdk-test:key1 00:37:22.360 02:24:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1775009 00:37:22.360 02:24:27 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:22.360 02:24:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1775009 00:37:22.360 02:24:27 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1775009 ']' 00:37:22.360 02:24:27 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.360 02:24:27 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:22.360 02:24:27 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.360 02:24:27 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:22.360 02:24:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:22.360 [2024-07-14 02:24:28.005335] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
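For the keyring_linux variant, prep_key again goes through format_interchange_psk from test/nvmf/common.sh, which base64-wraps the configured key plus a trailing checksum and prefixes it as NVMeTLSkey-1:<digest>: (digest 0 producing the 00 field seen in the strings below); the result lands in /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. A sketch of the same preparation by hand, assuming test/nvmf/common.sh can be sourced on its own from the repo root:

  source test/nvmf/common.sh                                          # provides format_interchange_psk
  psk=$(format_interchange_psk 00112233445566778899aabbccddeeff 0)    # -> NVMeTLSkey-1:00:...:
  echo "$psk" > /tmp/:spdk-test:key0
  chmod 0600 /tmp/:spdk-test:key0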
00:37:22.360 [2024-07-14 02:24:28.005418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775009 ] 00:37:22.360 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.617 [2024-07-14 02:24:28.085157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.617 [2024-07-14 02:24:28.178284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:22.875 02:24:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:22.875 [2024-07-14 02:24:28.397957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.875 null0 00:37:22.875 [2024-07-14 02:24:28.430017] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:22.875 [2024-07-14 02:24:28.430473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.875 02:24:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:22.875 92341398 00:37:22.875 02:24:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:22.875 32485961 00:37:22.875 02:24:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1775046 00:37:22.875 02:24:28 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:22.875 02:24:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1775046 /var/tmp/bperf.sock 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1775046 ']' 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:22.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:22.875 02:24:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:22.875 [2024-07-14 02:24:28.495092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
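In the kernel-keyring variant the PSK never sits in a file that SPDK reads at attach time: the interchange string is loaded into the session keyring with keyctl, and the bdev layer resolves it by name when the controller is attached. bdevperf is started with --wait-for-rpc, so the test enables the Linux keyring provider first and only then calls framework_start_init. In outline, with the same path assumptions as earlier and the serial numbers specific to this run:

  keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable      # enabled before framework_start_init, matching the test order
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  keyctl search @s user :spdk-test:key0                               # prints the serial (92341398 here)

The cleanup path mirrors this: keyctl search recovers each serial and keyctl unlink removes the key from the session keyring, which is where the "1 links removed" messages further down come from.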
00:37:22.875 [2024-07-14 02:24:28.495190] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775046 ] 00:37:22.875 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.875 [2024-07-14 02:24:28.557251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.132 [2024-07-14 02:24:28.641160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.132 02:24:28 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:23.132 02:24:28 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:23.132 02:24:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:23.132 02:24:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:23.388 02:24:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:23.388 02:24:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:23.645 02:24:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:23.645 02:24:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:23.903 [2024-07-14 02:24:29.503993] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:23.903 nvme0n1 00:37:23.903 02:24:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:23.903 02:24:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:23.903 02:24:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:24.161 02:24:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:24.161 02:24:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:24.161 02:24:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.161 02:24:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:24.161 02:24:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:24.419 02:24:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:24.419 02:24:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:24.419 02:24:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.419 02:24:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.419 02:24:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:24.677 02:24:30 keyring_linux -- keyring/linux.sh@25 -- # sn=92341398 00:37:24.677 02:24:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:24.677 02:24:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:37:24.677 02:24:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 92341398 == \9\2\3\4\1\3\9\8 ]] 00:37:24.677 02:24:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 92341398 00:37:24.677 02:24:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:24.677 02:24:30 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:24.677 Running I/O for 1 seconds... 00:37:25.609 00:37:25.609 Latency(us) 00:37:25.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.609 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:25.609 nvme0n1 : 1.03 3423.15 13.37 0.00 0.00 36922.21 14757.74 54758.97 00:37:25.609 =================================================================================================================== 00:37:25.609 Total : 3423.15 13.37 0.00 0.00 36922.21 14757.74 54758.97 00:37:25.609 0 00:37:25.609 02:24:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:25.609 02:24:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:25.867 02:24:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:25.867 02:24:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:25.867 02:24:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:25.867 02:24:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:25.867 02:24:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.867 02:24:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:26.125 02:24:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:26.125 02:24:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:26.125 02:24:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:26.125 02:24:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:26.125 02:24:31 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:26.125 02:24:31 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:26.125 02:24:31 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:26.125 02:24:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:26.125 02:24:31 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:26.125 02:24:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:26.125 02:24:31 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:26.125 02:24:31 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:26.382 [2024-07-14 02:24:32.022189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1717860 (107): Transport endpoint is not connected 00:37:26.382 [2024-07-14 02:24:32.022217] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:26.382 [2024-07-14 02:24:32.023195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1717860 (9): Bad file descriptor 00:37:26.382 [2024-07-14 02:24:32.024182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:26.382 [2024-07-14 02:24:32.024210] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:26.382 [2024-07-14 02:24:32.024235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:26.382 request: 00:37:26.382 { 00:37:26.382 "name": "nvme0", 00:37:26.382 "trtype": "tcp", 00:37:26.382 "traddr": "127.0.0.1", 00:37:26.382 "adrfam": "ipv4", 00:37:26.382 "trsvcid": "4420", 00:37:26.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.382 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.382 "prchk_reftag": false, 00:37:26.382 "prchk_guard": false, 00:37:26.382 "hdgst": false, 00:37:26.382 "ddgst": false, 00:37:26.382 "psk": ":spdk-test:key1", 00:37:26.382 "method": "bdev_nvme_attach_controller", 00:37:26.382 "req_id": 1 00:37:26.382 } 00:37:26.382 Got JSON-RPC error response 00:37:26.382 response: 00:37:26.382 { 00:37:26.382 "code": -5, 00:37:26.382 "message": "Input/output error" 00:37:26.382 } 00:37:26.382 02:24:32 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:26.383 02:24:32 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:26.383 02:24:32 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:26.383 02:24:32 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@33 -- # sn=92341398 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 92341398 00:37:26.383 1 links removed 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@33 -- # sn=32485961 00:37:26.383 02:24:32 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 32485961 00:37:26.383 1 links removed 00:37:26.383 02:24:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1775046 00:37:26.383 02:24:32 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1775046 ']' 00:37:26.383 02:24:32 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1775046 00:37:26.383 02:24:32 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:26.383 02:24:32 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:26.383 02:24:32 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1775046 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1775046' 00:37:26.641 killing process with pid 1775046 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@967 -- # kill 1775046 00:37:26.641 Received shutdown signal, test time was about 1.000000 seconds 00:37:26.641 00:37:26.641 Latency(us) 00:37:26.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.641 =================================================================================================================== 00:37:26.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@972 -- # wait 1775046 00:37:26.641 02:24:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1775009 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1775009 ']' 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1775009 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1775009 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1775009' 00:37:26.641 killing process with pid 1775009 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@967 -- # kill 1775009 00:37:26.641 02:24:32 keyring_linux -- common/autotest_common.sh@972 -- # wait 1775009 00:37:27.207 00:37:27.208 real 0m4.847s 00:37:27.208 user 0m9.133s 00:37:27.208 sys 0m1.455s 00:37:27.208 02:24:32 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:27.208 02:24:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:27.208 ************************************ 00:37:27.208 END TEST keyring_linux 00:37:27.208 ************************************ 00:37:27.208 02:24:32 -- common/autotest_common.sh@1142 -- # return 0 00:37:27.208 02:24:32 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@339 
-- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:27.208 02:24:32 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:27.208 02:24:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:27.208 02:24:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:27.208 02:24:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:27.208 02:24:32 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:27.208 02:24:32 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:27.208 02:24:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:27.208 02:24:32 -- common/autotest_common.sh@10 -- # set +x 00:37:27.208 02:24:32 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:27.208 02:24:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:27.208 02:24:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:27.208 02:24:32 -- common/autotest_common.sh@10 -- # set +x 00:37:29.104 INFO: APP EXITING 00:37:29.104 INFO: killing all VMs 00:37:29.104 INFO: killing vhost app 00:37:29.104 INFO: EXIT DONE 00:37:30.039 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:30.039 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:30.039 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:30.039 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:30.039 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:30.039 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:30.039 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:30.039 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:30.039 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:30.039 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:30.039 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:30.039 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:30.039 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:30.039 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:30.039 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:30.039 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:30.039 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:31.441 Cleaning 00:37:31.441 Removing: /var/run/dpdk/spdk0/config 00:37:31.441 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:31.441 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:31.441 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:31.441 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:31.441 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:31.441 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:31.441 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:31.441 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:31.441 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:31.441 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:31.441 Removing: /var/run/dpdk/spdk1/config 00:37:31.441 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:31.441 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:31.441 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:31.441 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 
00:37:31.441 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:31.441 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:31.441 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:31.441 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:31.441 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:31.441 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:31.441 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:31.441 Removing: /var/run/dpdk/spdk2/config 00:37:31.441 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:31.441 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:31.441 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:31.441 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:31.441 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:31.441 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:31.441 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:31.441 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:31.441 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:31.441 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:31.441 Removing: /var/run/dpdk/spdk3/config 00:37:31.441 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:31.441 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:31.441 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:31.441 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:31.441 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:31.441 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:31.441 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:31.441 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:31.441 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:31.441 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:31.441 Removing: /var/run/dpdk/spdk4/config 00:37:31.441 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:31.441 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:31.441 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:31.441 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:31.441 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:31.441 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:31.441 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:31.441 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:31.441 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:31.441 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:31.441 Removing: /dev/shm/bdev_svc_trace.1 00:37:31.441 Removing: /dev/shm/nvmf_trace.0 00:37:31.441 Removing: /dev/shm/spdk_tgt_trace.pid1453478 00:37:31.441 Removing: /var/run/dpdk/spdk0 00:37:31.441 Removing: /var/run/dpdk/spdk1 00:37:31.441 Removing: /var/run/dpdk/spdk2 00:37:31.441 Removing: /var/run/dpdk/spdk3 00:37:31.441 Removing: /var/run/dpdk/spdk4 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1451918 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1452647 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1453478 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1453897 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1454588 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1454730 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1455448 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1455463 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1455701 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1457006 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1457939 00:37:31.441 Removing: 
/var/run/dpdk/spdk_pid1458236 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1458427 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1458636 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1458825 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1458986 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1459139 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1459325 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1459628 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1461996 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1462160 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1462326 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1462453 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1462763 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1462831 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1463197 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1463206 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1463537 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1463568 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1463781 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1463903 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1464277 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1464433 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1464944 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1465293 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1465334 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1465510 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1465673 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1465903 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1466102 00:37:31.441 Removing: /var/run/dpdk/spdk_pid1466258 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1466410 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1466684 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1466840 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1467004 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1467173 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1467433 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1467589 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1467747 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1468014 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1468172 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1468336 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1468492 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1468767 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1468927 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1469090 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1469362 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1469434 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1469638 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1471804 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1525910 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1528528 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1535362 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1538645 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1540996 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1541518 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1545353 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1549074 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1549076 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1549725 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1550341 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1550922 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1551321 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1551326 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1551583 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1551713 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1551722 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1552294 00:37:31.709 Removing: 
/var/run/dpdk/spdk_pid1552913 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1553565 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1554084 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1554089 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1554350 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1555729 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1556451 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1561810 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1561969 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1564563 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1568157 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1570319 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1576582 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1581772 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1583019 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1583736 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1594412 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1596503 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1621920 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1624706 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1625879 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1627188 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1627213 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1627342 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1627478 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1627793 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1629109 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1629826 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1630134 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1631747 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1632173 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1632615 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1635118 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1638371 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1641903 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1665364 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1668036 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1671914 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1673357 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1674551 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1676983 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1679336 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1683416 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1683525 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1686287 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1686440 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1686576 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1686862 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1686867 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1687965 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1689228 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1690414 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1691595 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1692769 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1693946 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1697748 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1698085 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1699364 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1700101 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1703899 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1706396 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1709702 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1713010 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1719230 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1723582 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1723682 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1735792 00:37:31.709 Removing: 
/var/run/dpdk/spdk_pid1736291 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1736703 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1737108 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1737719 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1738202 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1738961 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1739640 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1742019 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1742271 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1745946 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1746111 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1747721 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1752961 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1752972 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1755853 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1757249 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1758647 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1759393 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1760794 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1761661 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1767152 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1767502 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1767893 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1769447 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1769894 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1770239 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1773179 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1773192 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1774641 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1775009 00:37:31.709 Removing: /var/run/dpdk/spdk_pid1775046 00:37:31.709 Clean 00:37:31.968 02:24:37 -- common/autotest_common.sh@1451 -- # return 0 00:37:31.968 02:24:37 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:31.968 02:24:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:31.968 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:37:31.968 02:24:37 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:31.968 02:24:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:31.968 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:37:31.968 02:24:37 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:31.968 02:24:37 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:31.968 02:24:37 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:31.968 02:24:37 -- spdk/autotest.sh@391 -- # hash lcov 00:37:31.968 02:24:37 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:31.968 02:24:37 -- spdk/autotest.sh@393 -- # hostname 00:37:31.968 02:24:37 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:32.227 geninfo: WARNING: invalid characters removed from testname! 
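[Editor's note, not part of the captured console output] The lines that follow show the coverage post-processing stage of the run: lcov captures the test-time data, merges it with the pre-test baseline, and then strips vendored and system paths from the combined report. A condensed sketch of that sequence is given below; the output directory, tracefile names, and filter patterns are taken from the log itself, while the LCOV_OPTS variable and the standalone-script framing are illustrative assumptions.

```bash
#!/usr/bin/env bash
# Condensed sketch of the coverage aggregation echoed in the log below (paths from the log).
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# 1. Capture coverage accumulated while the tests ran; the hostname is used as the test name.
lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

# 2. Merge the pre-test baseline with the test capture into a single tracefile.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Remove vendored DPDK, system headers, and helper apps from the combined report.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done
```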
00:38:04.292 02:25:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:05.230 02:25:10 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:09.419 02:25:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:12.705 02:25:18 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:15.992 02:25:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:18.559 02:25:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:21.849 02:25:27 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:21.849 02:25:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:21.849 02:25:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:21.849 02:25:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:21.849 02:25:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:21.849 02:25:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.849 02:25:27 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.849 02:25:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.849 02:25:27 -- paths/export.sh@5 -- $ export PATH 00:38:21.849 02:25:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.849 02:25:27 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:21.849 02:25:27 -- common/autobuild_common.sh@444 -- $ date +%s 00:38:21.849 02:25:27 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720916727.XXXXXX 00:38:21.849 02:25:27 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720916727.eK10JV 00:38:21.849 02:25:27 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:21.849 02:25:27 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:38:21.849 02:25:27 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:21.849 02:25:27 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:21.849 02:25:27 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:21.849 02:25:27 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:21.849 02:25:27 -- common/autobuild_common.sh@460 -- $ get_config_params 00:38:21.849 02:25:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:21.849 02:25:27 -- common/autotest_common.sh@10 -- $ set +x 00:38:21.849 02:25:27 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:21.849 02:25:27 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:38:21.849 02:25:27 -- pm/common@17 -- $ local monitor 00:38:21.849 02:25:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.849 02:25:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.849 02:25:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.849 
02:25:27 -- pm/common@21 -- $ date +%s 00:38:21.849 02:25:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.849 02:25:27 -- pm/common@21 -- $ date +%s 00:38:21.849 02:25:27 -- pm/common@25 -- $ sleep 1 00:38:21.849 02:25:27 -- pm/common@21 -- $ date +%s 00:38:21.849 02:25:27 -- pm/common@21 -- $ date +%s 00:38:21.849 02:25:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720916727 00:38:21.849 02:25:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720916727 00:38:21.849 02:25:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720916727 00:38:21.849 02:25:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720916727 00:38:21.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720916727_collect-vmstat.pm.log 00:38:21.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720916727_collect-cpu-load.pm.log 00:38:21.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720916727_collect-cpu-temp.pm.log 00:38:21.850 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720916727_collect-bmc-pm.bmc.pm.log 00:38:22.785 02:25:28 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:38:22.785 02:25:28 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:22.785 02:25:28 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:22.785 02:25:28 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:22.785 02:25:28 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:22.785 02:25:28 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:22.785 02:25:28 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:22.785 02:25:28 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:22.785 02:25:28 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:22.785 02:25:28 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:22.785 02:25:28 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:22.785 02:25:28 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:22.785 02:25:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:22.785 02:25:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:22.785 02:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.785 02:25:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:22.785 02:25:28 -- pm/common@44 -- $ pid=1786269 00:38:22.785 02:25:28 -- pm/common@50 -- $ kill -TERM 1786269 00:38:22.785 02:25:28 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:22.785 02:25:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:22.785 02:25:28 -- pm/common@44 -- $ pid=1786271 00:38:22.785 02:25:28 -- pm/common@50 -- $ kill -TERM 1786271 00:38:22.785 02:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.785 02:25:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:22.785 02:25:28 -- pm/common@44 -- $ pid=1786273 00:38:22.785 02:25:28 -- pm/common@50 -- $ kill -TERM 1786273 00:38:22.785 02:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.785 02:25:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:22.785 02:25:28 -- pm/common@44 -- $ pid=1786304 00:38:22.785 02:25:28 -- pm/common@50 -- $ sudo -E kill -TERM 1786304 00:38:22.785 + [[ -n 1347222 ]] 00:38:22.785 + sudo kill 1347222 00:38:22.796 [Pipeline] } 00:38:22.821 [Pipeline] // stage 00:38:22.827 [Pipeline] } 00:38:22.850 [Pipeline] // timeout 00:38:22.856 [Pipeline] } 00:38:22.879 [Pipeline] // catchError 00:38:22.884 [Pipeline] } 00:38:22.903 [Pipeline] // wrap 00:38:22.910 [Pipeline] } 00:38:22.927 [Pipeline] // catchError 00:38:22.936 [Pipeline] stage 00:38:22.939 [Pipeline] { (Epilogue) 00:38:22.955 [Pipeline] catchError 00:38:22.957 [Pipeline] { 00:38:22.973 [Pipeline] echo 00:38:22.975 Cleanup processes 00:38:22.982 [Pipeline] sh 00:38:23.270 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.270 1786431 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:23.270 1786534 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.285 [Pipeline] sh 00:38:23.571 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.572 ++ grep -v 'sudo pgrep' 00:38:23.572 ++ awk '{print $1}' 00:38:23.572 + sudo kill -9 1786431 00:38:23.585 [Pipeline] sh 00:38:23.869 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:36.078 [Pipeline] sh 00:38:36.367 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:36.367 Artifacts sizes are good 00:38:36.384 [Pipeline] archiveArtifacts 00:38:36.402 Archiving artifacts 00:38:36.654 [Pipeline] sh 00:38:36.940 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:36.955 [Pipeline] cleanWs 00:38:36.967 [WS-CLEANUP] Deleting project workspace... 00:38:36.967 [WS-CLEANUP] Deferred wipeout is used... 00:38:36.974 [WS-CLEANUP] done 00:38:36.976 [Pipeline] } 00:38:36.997 [Pipeline] // catchError 00:38:37.010 [Pipeline] sh 00:38:37.291 + logger -p user.info -t JENKINS-CI 00:38:37.300 [Pipeline] } 00:38:37.323 [Pipeline] // stage 00:38:37.330 [Pipeline] } 00:38:37.350 [Pipeline] // node 00:38:37.356 [Pipeline] End of Pipeline 00:38:37.392 Finished: SUCCESS